Chapter 11. Pushing a container to a registry and embedding it into an image
Chapter 11. Pushing a container to a registry and embedding it into an image With RHEL image builder, you can build security hardened images using the OpenSCAP tool. You can take advantage of the support for container customization in the blueprints to create a container and embed it directly into the image you create. 11.1. Blueprint customization to embed a container into an image To embed a container from the registry.access.redhat.com registry, you must add a container customization to your blueprint. For example: source - Mandatory field. It is a reference to the container image at a registry. This example uses the registry.access.redhat.com registry. You can specify a tag version. The default tag version is latest . name - The name of the container in the local registry. tls-verify - Boolean field. The tls-verify boolean field controls the transport layer security. The default value is true . RHEL image builder pulls the container during the image build and stores the container into the image. The default local container storage location depends on the image type, so that all supported container-tools , such as Podman, are able to work with it. The embedded containers are not started. To access protected container resources, you can use a containers-auth.json file. 11.2. The Container registry credentials The osbuild-worker@.service is a template service that can start multiple service instances. By default, the osbuild-composer service always starts with only one local osbuild-worker , specifically osbuild-worker@1.service . The osbuild-worker service is responsible for the communication with the container registry. To enable the service, set up the /etc/osbuild-worker/osbuild-worker.toml configuration file. Note After setting the /etc/osbuild-worker/osbuild-worker.toml configuration file, you must restart the osbuild-worker service, because it reads the /etc/osbuild-worker/osbuild-worker.toml configuration file only once, during the osbuild-worker service start. To restart the running service instances, use the following systemd command: With that, you restart all the started instances of osbuild-worker , specifically osbuild-worker@1.service , the only service that might be running. The /etc/osbuild-worker/osbuild-worker.toml configuration file has a containers section with an auth_file_path entry that is a string referring to a path of a containers-auth.json file to be used for accessing protected resources. The container registry credentials are only used to pull a container image from a registry when embedding the container into the image. For example: Additional resources The containers-auth.json man page on your system 11.3. Pushing a container artifact directly to a container registry You can push container artifacts, such as RHEL for Edge container images, directly to a container registry after you build them, by using the RHEL image builder CLI. Prerequisites Access to the quay.io registry . This example uses the quay.io container registry as a target registry, but you can use a container registry of your choice. Procedure Set up a registry-config.toml file to select the container provider. The credentials are optional. Create a blueprint in the .toml format. This blueprint for the container installs an nginx package. Push the blueprint: Build the container image by passing the registry and the repository to the composer-cli tool as arguments. simple-container - the blueprint name. container - the image type. 
"quay.io:8080/osbuild/ repository " - quay.io is the target registry, osbuild is the organization and repository is the location to push the container when it finishes building. Optionally, you can set a tag . If you do not set a value for :tag , it uses :latest tag by default. Note Building the container image takes time because of resolving dependencies of the customized packages. After the image build finishes, the container you created is available in quay.io . Verification Open quay.io . and click Repository Tags . Copy the manifest ID value to build the image in which you want to embed a container. Additional resources Quay.io - Working with tags 11.4. Building an image and pulling the container into the image After you have created the container image, you can build your customized image and pull the container image into it. For that, you must specify a container customization in the blueprint, and the container name for the final image. During the build process, the container image is fetched and placed in the local Podman container storage. Prerequisites You created a container image and pushed it into your local quay.io container registry instance. See Pushing a container artifact directly to a container registry . You have access to registry.access.redhat.com . You have a container manifest ID . You have the qemu-kvm and qemu-img packages installed. Procedure Create a blueprint to build a qcow2 image. The blueprint must contain the " " customization. Push the blueprint: Build the container image: image is the blueprint name. qcow2 is the image type. Note Building the image takes time because it checks the container on quay.io registry. To check the status of the compose: A finished compose shows the FINISHED status value. To identify your compose in the list, use its UUID. After the compose process is finished, download the resulting image file to your default download location: Replace UUID with the UUID value shown in the steps. You can use the qcow2 image you created and downloaded to create a VM. Verification From the resulting qcow2 image that you downloaded, perform the following steps: Start the qcow2 image in a VM. See Creating a virtual machine from a KVM guest image . The qemu wizard opens. Login in to the qcow2 image. Enter the username and password. These can be the username and password you set up in the .qcow2 blueprint in the "customizations.user" section, or created at boot time with cloud-init . Run the container image and open a shell prompt inside the container: registry.access.redhat.com is the target registry, osbuild is the organization and repository is the location to push the container when it finishes building. Check that the packages you added to the blueprint are available: The output shows you the nginx package path. Additional resources Red Hat Container Registry Authentication Accessing and Configuring the Red Hat Registry Basic Podman commands Running Skopeo in a container
[ "[[containers]] source = \"registry.access.redhat.com/ubi9/ubi:latest\" name = \"local-name\" tls-verify = true", "systemctl restart osbuild-worker@*", "[containers] auth_file_path = \"/etc/osbuild-worker/containers-auth.json\"", "provider = \" container_provider \" [settings] tls_verify = false username = \" admin \" password = \" your_password \"", "name = \"simple-container\" description = \"Simple RHEL container\" version = \"0.0.1\" [[packages]] name = \"nginx\" version = \"*\"", "composer-cli blueprints push blueprint.toml", "composer-cli compose start simple-container container \"quay.io:8080/osbuild/ repository \" registry-config.toml", "You can see details about the container you created, such as: - last modified - image size - the `manifest ID`, that you can copy to the clipboard.", "name = \"image\" description = \"A qcow2 image with a container\" version = \"0.0.1\" distro = \"rhel-90\" [[packages]] name = \"podman\" version = \"*\" [[containers]] source = \"registry.access.redhat.com/ubi9:8080/osbuild/container/container-image@sha256:manifest-ID-from-Repository-tag: tag-version\" name = \"source-name\" tls-verify = true", "composer-cli blueprints push blueprint-image .toml", "composer-cli start compose image qcow2", "composer-cli compose status", "composer-cli compose image UUID", "podman run -it registry.access.redhat.com/ubi9:8080/osbuild/ repository /bin/bash/", "type -a nginx" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/composing_a_customized_rhel_system_image/assembly_pushing-a-container-to-a-register-and-embedding-it-into-a-image_composing-a-customized-rhel-system-image
Appendix B. Glossary of Terms
Appendix B. Glossary of Terms This glossary documents various terms used in relation to Red Hat Satellite. Activation Key A token for host registration and subscription attachment. Activation keys define subscriptions, products, content views, and other parameters to be associated with a newly created host. Answer File A configuration file that defines settings for an installation scenario. Answer files are defined in the YAML format and stored in the /etc/foreman-installer/scenarios.d/ directory. ARF Report The result of an OpenSCAP audit. Summarizes the security compliance of hosts managed by Red Hat Satellite. Audits Provide a report on changes made by a specific user. Audits can be viewed in the Satellite web UI under Monitor > Audits . Baseboard Management Controller (BMC) Enables remote power management of bare-metal hosts. In Satellite, you can create a BMC interface to manage selected hosts. Boot Disk An ISO image used for PXE-less provisioning. This ISO enables the host to connect to Satellite Server, boot the installation media, and install the operating system. There are several kinds of boot disks: host image , full host image , generic image , and subnet image . Capsule (Capsule Server) An additional server that can be used in a Red Hat Satellite deployment to facilitate content federation and distribution (act as a Pulp mirror), and to run other localized services (Puppet server, DHCP , DNS , TFTP , and more). Capsules are useful for Satellite deployment across various geographical locations. In upstream Foreman terminology, Capsule is referred to as Smart Proxy. Catalog A document that describes the desired system state for one specific host managed by Puppet. It lists all of the resources that need to be managed, as well as any dependencies between those resources. Catalogs are compiled by a Puppet server from Puppet Manifests and data from Puppet Agents. Candlepin A service within Katello responsible for subscription management. Compliance Policy Refers to a scheduled task executed on Satellite Server that checks the specified hosts for compliance against SCAP content. Compute Profile Specifies default attributes for new virtual machines on a compute resource. Compute Resource A virtual or cloud infrastructure, which Red Hat Satellite uses for deployment of hosts and systems. Examples include Red Hat Virtualization, Red Hat OpenStack Platform, EC2, and VMWare. Container (Docker Container) An isolated application sandbox that contains all runtime dependencies required by an application. Satellite supports container provisioning on a dedicated compute resource. Container Image A static snapshot of the container's configuration. Satellite supports various methods of importing container images as well as distributing images to hosts through content views. Content A general term for everything Satellite distributes to hosts. Includes software packages (RPM files), or Docker images. Content is synchronized into the Library and then promoted into life cycle environments using content views so that they can be consumed by hosts. Content Delivery Network (CDN) The mechanism used to deliver Red Hat content to Satellite Server. Content Host The part of a host that manages tasks related to content and subscriptions. Content View A subset of Library content created by intelligent filtering. Once a content view is published, it can be promoted through the life cycle environment path, or modified using incremental upgrades. 
Discovered Host A bare-metal host detected on the provisioning network by the Discovery plug-in. Discovery Image Refers to the minimal operating system based on Red Hat Enterprise Linux that is PXE-booted on hosts to acquire initial hardware information and to communicate with Satellite Server before starting the provisioning process. Discovery Plug-in Enables automatic bare-metal discovery of unknown hosts on the provisioning network. The plug-in consists of three components: services running on Satellite Server and Capsule Server, and the Discovery image running on the host. Discovery Rule A set of predefined provisioning rules which assigns a host group to discovered hosts and triggers provisioning automatically. Docker Tag A mark used to differentiate container images, typically by the version of the application stored in the image. In the Satellite web UI, you can filter images by tag under Content > Docker Tags . ERB Embedded Ruby (ERB) is a template syntax used in provisioning and job templates. Errata Updated RPM packages containing security fixes, bug fixes, and enhancements. In relation to a host, an erratum is applicable if it updates a package installed on the host and installable if it is present in the host's content view (which means it is accessible for installation on the host). External Node Classifier A construct that provides additional data for a server to use when configuring hosts. Red Hat Satellite acts as an External Node Classifier to Puppet servers in a Satellite deployment. Note that the External Node Classifier will be removed in a future Satellite version. Facter A program that provides information (facts) about the system on which it is run; for example, Facter can report total memory, operating system version, architecture, and more. Puppet modules enable specific configurations based on host data gathered by Facter. Facts Host parameters such as total memory, operating system version, or architecture. Facts are reported by Facter and used by Puppet. Foreman The component mainly responsible for provisioning and content life cycle management. Foreman is the main upstream counterpart of Red Hat Satellite. Satellite services A set of services that Satellite Server and Capsule Servers use for operation. You can use the satellite-maintain tool to manage these services. To see the full list of services, enter the satellite-maintain service list command on the machine where Satellite or Capsule Server is installed. Foreman Hook An executable that is automatically triggered when an orchestration event occurs, such as when a host is created or when provisioning of a host has completed. Note that Foreman Hook functionality is deprecated and will be removed in a future Satellite version. Full Host Image A boot disk used for PXE-less provisioning of a specific host. The full host image contains an embedded Linux kernel and init RAM disk of the associated operating system installer. Generic Image A boot disk for PXE-less provisioning that is not tied to a specific host. The generic image sends the host's MAC address to Satellite Server, which matches it against the host entry. Hammer A command line tool for managing Red Hat Satellite. You can execute Hammer commands from the command line or utilize them in scripts. Hammer also provides an interactive shell. Host Refers to any system, either physical or virtual, that Red Hat Satellite manages. Host Collection A user-defined group of one or more Hosts used for bulk actions such as errata installation. 
Host Group A template for building a host. Host groups hold shared parameters, such as subnet or life cycle environment, that are inherited by host group members. Host groups can be nested to create a hierarchical structure. Host Image A boot disk used for PXE-less provisioning of a specific host. The host image only contains the boot files necessary to access the installation media on Satellite Server. Incremental Upgrade (of a Content View) The act of creating a new (minor) content view version in a life cycle environment. Incremental upgrades provide a way to make in-place modification of an already published content view. Useful for rapid updates, for example when applying security errata. Job A command executed remotely on a host from Satellite Server. Every job is defined in a job template. Job Template Defines properties of a job. Katello A Foreman plug-in responsible for subscription and repository management. Lazy Sync The ability to change a yum repository's default download policy of Immediate to On Demand . The On Demand setting saves storage space and synchronization time by only downloading the packages when requested by a client. Location A collection of default settings that represent a physical place. Library A container for content from all synchronized repositories on Satellite Server. Libraries exist by default for each organization as the root of every life cycle environment path and the source of content for every content view. Life Cycle Environment A container for content view versions consumed by the content hosts. A Life Cycle Environment represents a step in the life cycle environment path. Content moves through life cycle environments by publishing and promoting content views. Life Cycle Environment Path A sequence of life cycle environments through which the content views are promoted. You can promote a content view through a typical promotion path; for example, from development to test to production. Manifest (Red Hat Subscription Manifest) A mechanism for transferring subscriptions from the Red Hat Customer Portal to Red Hat Satellite. Do not confuse with Puppet Manifest . OpenSCAP A project implementing security compliance auditing according to the Security Content Automation Protocol (SCAP). OpenSCAP is integrated in Satellite to provide compliance auditing for managed hosts. Organization An isolated collection of systems, content, and other functionality within a Satellite deployment. Parameter Defines the behavior of Red Hat Satellite components during provisioning. Depending on the parameter scope, we distinguish between global, domain, host group, and host parameters. Depending on the parameter complexity, we distinguish between simple parameters (key-value pair) and smart parameters (conditional arguments, validation, overrides). Parametrized Class (Smart Class Parameter) A parameter created by importing a class from Puppet server. Permission Defines an action related to a selected part of Satellite infrastructure (resource type). Each resource type is associated with a set of permissions, for example the Architecture resource type has the following permissions: view_architectures , create_architectures , edit_architectures , and destroy_architectures . You can group permissions into roles and associate them with users or user groups. Product A collection of content repositories. Products are either provided by Red Hat CDN or created by the Satellite administrator to group custom repositories. 
Promote (a Content View) The act of moving a content view from one life cycle environment to another. Provisioning Template Defines host provisioning settings. Provisioning templates can be associated with host groups, life cycle environments, or operating systems. Publish (a Content View) The act of making a content view version available in a life cycle environment and usable by hosts. Pulp A service within Katello responsible for repository and content management. Pulp Mirror A Capsule Server component that mirrors content. Puppet The configuration management component of Satellite. Puppet Agent A service running on a host that applies configuration changes to that host. Puppet Environment An isolated set of Puppet Agent nodes that can be associated with a specific set of Puppet Modules. Puppet Manifest Refers to Puppet scripts, which are files with the .pp extension. The files contain code to define a set of necessary resources, such as packages, services, files, users and groups, and so on, using a set of key-value pairs for their attributes. Do not confuse with Manifest (Red Hat Subscription Manifest) . Puppet Server A Capsule Server component that provides Puppet Manifests to hosts for execution by the Puppet Agent. Puppet Module A self-contained bundle of code (Puppet Manifests) and data (facts) that you can use to manage resources such as users, files, and services. Recurring Logic A job executed automatically according to a schedule. In the Satellite web UI, you can view those jobs under Monitor > Recurring logics . Registry An archive of container images. Satellite supports importing images from local and external registries. Satellite itself can act as an image registry for hosts. However, hosts cannot push changes back to the registry. Repository Provides storage for a collection of content. Resource Type Refers to a part of Satellite infrastructure, for example host, capsule, or architecture. Used in permission filtering. Role Specifies a collection of permissions that are applied to a set of resources, such as hosts. Roles can be assigned to users and user groups. Satellite provides a number of predefined roles. SCAP content A file containing the configuration and security baseline against which hosts are checked. Used in compliance policies. Scenario A set of predefined settings for the Satellite CLI installer. Scenario defines the type of installation, for example to install Capsule Server execute satellite-installer --scenario capsule . Every scenario has its own answer file to store the scenario settings. Smart Proxy A Capsule Server component that can integrate with external services, such as DNS or DHCP . In upstream Foreman terminology, Smart Proxy is a synonym of Capsule. Smart Variable A configuration value used by classes in Puppet modules. Standard Operating Environment (SOE) A controlled version of the operating system on which applications are deployed. Subnet Image A type of generic image for PXE-less provisioning that communicates through Capsule Server. Subscription An entitlement for receiving content and service from Red Hat. Synchronization Refers to mirroring content from external resources into the Red Hat Satellite Library. Synchronization Plan Provides scheduled execution of content synchronization. Task A background process executed on the Satellite or Capsule Server, such as repository synchronization or content view publishing. You can monitor the task status in the Satellite web UI under Monitor > Tasks . 
Trend A means of tracking changes in specific parts of Satellite infrastructure. Configure trends in the Satellite web UI under Monitor > Trends . User Group A collection of roles which can be assigned to a collection of users. User Anyone registered to use Red Hat Satellite. Authentication and authorization are possible through built-in logic, through external resources (LDAP, Identity Management, or Active Directory), or with Kerberos. virt-who An agent for retrieving IDs of virtual machines from the hypervisor. When used with Satellite, virt-who reports those IDs to Satellite Server so that it can provide subscriptions for hosts provisioned on virtual machines.
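Several entries in this glossary name command-line tools ( hammer , satellite-maintain , satellite-installer ). The following is a minimal sketch of how they are typically invoked on a Satellite Server, using only the commands mentioned in the definitions above; it is illustrative rather than a complete reference.

# List the services that Satellite Server uses for operation (see "Satellite services")
satellite-maintain service list

# Install Capsule Server by selecting the corresponding installer scenario (see "Scenario")
satellite-installer --scenario capsule

# Manage Satellite from the command line with Hammer (see "Hammer")
hammer host list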
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/satellite_overview_concepts_and_deployment_considerations/appe-Architecture_Guide-Glossary_of_Terms
Chapter 2. OpenID Connect client and token propagation quickstart
Chapter 2. OpenID Connect client and token propagation quickstart Learn how to use OpenID Connect (OIDC) and OAuth2 clients with filters to get, refresh, and propagate access tokens in your applications. For more information about OIDC Client and Token Propagation support in Quarkus, see the OpenID Connect (OIDC) and OAuth2 client and filters reference guide . To protect your applications by using Bearer Token Authorization, see the OpenID Connect (OIDC) Bearer token authentication guide. 2.1. Prerequisites To complete this guide, you need: Roughly 15 minutes An IDE JDK 17+ installed with JAVA_HOME configured appropriately Apache Maven 3.8.6 or later A working container runtime (Docker or Podman ) Optionally the Quarkus CLI if you want to use it Optionally Mandrel or GraalVM installed and configured appropriately if you want to build a native executable (or Docker if you use a native container build) jq tool 2.2. Architecture In this example, an application is built with two Jakarta REST resources, FrontendResource and ProtectedResource . Here, FrontendResource uses one of three methods to propagate access tokens to ProtectedResource : It can get a token by using an OIDC client filter before propagating it. It can get a token by using a programmatically created OIDC client and propagate it by passing it to a REST client method as an HTTP Authorization header value. It can use an OIDC token propagation filter to propagate the incoming access token. FrontendResource has eight endpoints: /frontend/user-name-with-oidc-client-token /frontend/admin-name-with-oidc-client-token /frontend/user-name-with-oidc-client-token-header-param /frontend/admin-name-with-oidc-client-token-header-param /frontend/user-name-with-oidc-client-token-header-param-blocking /frontend/admin-name-with-oidc-client-token-header-param-blocking /frontend/user-name-with-propagated-token /frontend/admin-name-with-propagated-token When either /frontend/user-name-with-oidc-client-token or /frontend/admin-name-with-oidc-client-token endpoint is called, FrontendResource uses a REST client with an OIDC client filter to get and propagate an access token to ProtectedResource . When either /frontend/user-name-with-oidc-client-token-header-param or /frontend/admin-name-with-oidc-client-token-header-param endpoint is called, FrontendResource uses a programmatically created OIDC client to get and propagate an access token to ProtectedResource by passing it to a REST client method as an HTTP Authorization header value. When either /frontend/user-name-with-propagated-token or /frontend/admin-name-with-propagated-token endpoint is called, FrontendResource uses a REST client with OIDC Token Propagation Filter to propagate the current incoming access token to ProtectedResource . ProtectedResource has two endpoints: /protected/user-name /protected/admin-name Both endpoints return the username extracted from the incoming access token, which was propagated to ProtectedResource from FrontendResource . The only difference between these endpoints is that calling /protected/user-name is only allowed if the current access token has a user role, and calling /protected/admin-name is only allowed if the current access token has an admin role. 2.3. Solution We recommend that you follow the instructions in the following sections and create the application step by step. However, you can go right to the completed example. Clone the Git repository: git clone https://github.com/quarkusio/quarkus-quickstarts.git -b 3.15 , or download an archive . 
The solution is in the security-openid-connect-client-quickstart directory . 2.4. Creating the Maven project First, you need a new project. Create a new project with the following command: Using the Quarkus CLI: quarkus create app org.acme:security-openid-connect-client-quickstart \ --extension='oidc,rest-client-oidc-filter,rest-client-oidc-token-propagation,rest' \ --no-code cd security-openid-connect-client-quickstart To create a Gradle project, add the --gradle or --gradle-kotlin-dsl option. For more information about how to install and use the Quarkus CLI, see the Quarkus CLI guide. Using Maven: mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.15.1:create \ -DprojectGroupId=org.acme \ -DprojectArtifactId=security-openid-connect-client-quickstart \ -Dextensions='oidc,rest-client-oidc-filter,rest-client-oidc-token-propagation,rest' \ -DnoCode cd security-openid-connect-client-quickstart To create a Gradle project, add the -DbuildTool=gradle or -DbuildTool=gradle-kotlin-dsl option. For Windows users: If using cmd, (don't use backward slash \ and put everything on the same line) If using Powershell, wrap -D parameters in double quotes e.g. "-DprojectArtifactId=security-openid-connect-client-quickstart" It generates a Maven project, importing the oidc , rest-client-oidc-filter , rest-client-oidc-token-propagation , and rest extensions. If you already have your Quarkus project configured, you can add these extensions to your project by running the following command in your project base directory: Using the Quarkus CLI: quarkus extension add oidc,rest-client-oidc-filter,rest-client-oidc-token-propagation,rest Using Maven: ./mvnw quarkus:add-extension -Dextensions='oidc,rest-client-oidc-filter,rest-client-oidc-token-propagation,rest' Using Gradle: ./gradlew addExtension --extensions='oidc,rest-client-oidc-filter,rest-client-oidc-token-propagation,rest' It adds the following extensions to your build file: Using Maven: <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-oidc</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-rest-client-oidc-filter</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-rest-client-oidc-token-propagation</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-rest</artifactId> </dependency> Using Gradle: implementation("io.quarkus:quarkus-oidc,rest-client-oidc-filter,rest-client-oidc-token-propagation,rest") 2.5. Writing the application Start by implementing ProtectedResource : package org.acme.security.openid.connect.client; import jakarta.annotation.security.RolesAllowed; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import io.quarkus.security.Authenticated; import io.smallrye.mutiny.Uni; import org.eclipse.microprofile.jwt.JsonWebToken; @Path("/protected") @Authenticated public class ProtectedResource { @Inject JsonWebToken principal; @GET @RolesAllowed("user") @Produces("text/plain") @Path("userName") public Uni<String> userName() { return Uni.createFrom().item(principal.getName()); } @GET @RolesAllowed("admin") @Produces("text/plain") @Path("adminName") public Uni<String> adminName() { return Uni.createFrom().item(principal.getName()); } } ProtectedResource returns a name from both userName() and adminName() methods. The name is extracted from the current JsonWebToken . 
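Because ProtectedResource only requires a bearer access token, you can also call it directly once the application and Keycloak are running (see the later configuration and Keycloak sections). The following is a minimal sketch that reuses the password-grant curl command from the Testing section; the endpoint paths follow the @Path values in the code above, and the ports, realm, client, and user credentials are the ones configured later in this guide.

# Obtain an access token for alice, who has only the "user" role
export access_token=$(\
  curl --insecure -X POST http://localhost:8180/realms/quarkus/protocol/openid-connect/token \
    --user backend-service:secret \
    -H 'content-type: application/x-www-form-urlencoded' \
    -d 'username=alice&password=alice&grant_type=password' | jq --raw-output '.access_token' \
)

# Allowed: the token has the "user" role, so the response is 200 with the name alice
curl -i http://localhost:8080/protected/userName -H "Authorization: Bearer $access_token"

# Forbidden: the token lacks the "admin" role, so the response is 403
curl -i http://localhost:8080/protected/adminName -H "Authorization: Bearer $access_token"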
, add the following REST clients: RestClientWithOidcClientFilter , which uses an OIDC client filter provided by the quarkus-rest-client-oidc-filter extension to get and propagate an access token. RestClientWithTokenHeaderParam , which accepts a token already acquired by the programmatically created OidcClient as an HTTP Authorization header value. RestClientWithTokenPropagationFilter , which uses an OIDC token propagation filter provided by the quarkus-rest-client-oidc-token-propagation extension to get and propagate an access token. Add the RestClientWithOidcClientFilter REST client: package org.acme.security.openid.connect.client; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; import io.quarkus.oidc.client.filter.OidcClientFilter; import io.smallrye.mutiny.Uni; @RegisterRestClient @OidcClientFilter 1 @Path("/") public interface RestClientWithOidcClientFilter { @GET @Produces("text/plain") @Path("userName") Uni<String> getUserName(); @GET @Produces("text/plain") @Path("adminName") Uni<String> getAdminName(); } 1 Register an OIDC client filter with the REST client to get and propagate the tokens. Add the RestClientWithTokenHeaderParam REST client: package org.acme.security.openid.connect.client; import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; import io.smallrye.mutiny.Uni; import jakarta.ws.rs.GET; import jakarta.ws.rs.HeaderParam; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; @RegisterRestClient @Path("/") public interface RestClientWithTokenHeaderParam { @GET @Produces("text/plain") @Path("userName") Uni<String> getUserName(@HeaderParam("Authorization") String authorization); 1 @GET @Produces("text/plain") @Path("adminName") Uni<String> getAdminName(@HeaderParam("Authorization") String authorization); 2 } 1 2 RestClientWithTokenHeaderParam REST client expects that the tokens will be passed to it as HTTP Authorization header values. Add the RestClientWithTokenPropagationFilter REST client: package org.acme.security.openid.connect.client; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; import io.quarkus.oidc.token.propagation.AccessToken; import io.smallrye.mutiny.Uni; @RegisterRestClient @AccessToken 1 @Path("/") public interface RestClientWithTokenPropagationFilter { @GET @Produces("text/plain") @Path("userName") Uni<String> getUserName(); @GET @Produces("text/plain") @Path("adminName") Uni<String> getAdminName(); } 1 Register an OIDC token propagation filter with the REST client to propagate the incoming already-existing tokens. Important Do not use the RestClientWithOidcClientFilter and RestClientWithTokenPropagationFilter interfaces in the same REST client because they can conflict, leading to issues. For example, the OIDC client filter can override the token from the OIDC token propagation filter, or the propagation filter might not work correctly if it attempts to propagate a token when none is available, expecting the OIDC client filter to obtain a new token instead. Also, add OidcClientCreator to create an OIDC client programmatically at startup. 
OidcClientCreator supports RestClientWithTokenHeaderParam REST client calls: package org.acme.security.openid.connect.client; import java.util.Map; import org.eclipse.microprofile.config.inject.ConfigProperty; import io.quarkus.oidc.client.OidcClient; import io.quarkus.oidc.client.OidcClientConfig; import io.quarkus.oidc.client.OidcClientConfig.Grant.Type; import io.quarkus.oidc.client.OidcClients; import io.quarkus.runtime.StartupEvent; import io.smallrye.mutiny.Uni; import jakarta.enterprise.context.ApplicationScoped; import jakarta.enterprise.event.Observes; import jakarta.inject.Inject; @ApplicationScoped public class OidcClientCreator { @Inject OidcClients oidcClients; 1 @ConfigProperty(name = "quarkus.oidc.auth-server-url") String oidcProviderAddress; private volatile OidcClient oidcClient; public void startup(@Observes StartupEvent event) { createOidcClient().subscribe().with(client -> {oidcClient = client;}); } public OidcClient getOidcClient() { return oidcClient; } private Uni<OidcClient> createOidcClient() { OidcClientConfig cfg = new OidcClientConfig(); cfg.setId("myclient"); cfg.setAuthServerUrl(oidcProviderAddress); cfg.setClientId("backend-service"); cfg.getCredentials().setSecret("secret"); cfg.getGrant().setType(Type.PASSWORD); cfg.setGrantOptions(Map.of("password", Map.of("username", "alice", "password", "alice"))); return oidcClients.newClient(cfg); } } 1 OidcClients can be used to retrieve the already initialized, named OIDC clients and create new OIDC clients on demand. Now, finish creating the application by adding FrontendResource : package org.acme.security.openid.connect.client; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import io.quarkus.oidc.client.Tokens; import io.quarkus.oidc.client.runtime.TokensHelper; import org.eclipse.microprofile.rest.client.inject.RestClient; import io.smallrye.mutiny.Uni; @Path("/frontend") public class FrontendResource { @Inject @RestClient RestClientWithOidcClientFilter restClientWithOidcClientFilter; 1 @Inject @RestClient RestClientWithTokenPropagationFilter restClientWithTokenPropagationFilter; 2 @Inject OidcClientCreator oidcClientCreator; TokensHelper tokenHelper = new TokensHelper(); 3 @Inject @RestClient RestClientWithTokenHeaderParam restClientWithTokenHeaderParam; 4 @GET @Path("user-name-with-oidc-client-token") @Produces("text/plain") public Uni<String> getUserNameWithOidcClientToken() { 5 return restClientWithOidcClientFilter.getUserName(); } @GET @Path("admin-name-with-oidc-client-token") @Produces("text/plain") public Uni<String> getAdminNameWithOidcClientToken() { 6 return restClientWithOidcClientFilter.getAdminName(); } @GET @Path("user-name-with-propagated-token") @Produces("text/plain") public Uni<String> getUserNameWithPropagatedToken() { 7 return restClientWithTokenPropagationFilter.getUserName(); } @GET @Path("admin-name-with-propagated-token") @Produces("text/plain") public Uni<String> getAdminNameWithPropagatedToken() { 8 return restClientWithTokenPropagationFilter.getAdminName(); } @GET @Path("user-name-with-oidc-client-token-header-param") @Produces("text/plain") public Uni<String> getUserNameWithOidcClientTokenHeaderParam() { 9 return tokenHelper.getTokens(oidcClientCreator.getOidcClient()).onItem() .transformToUni(tokens -> restClientWithTokenHeaderParam.getUserName("Bearer " + tokens.getAccessToken())); } @GET @Path("admin-name-with-oidc-client-token-header-param") @Produces("text/plain") public Uni<String> 
getAdminNameWithOidcClientTokenHeaderParam() { 10 return tokenHelper.getTokens(oidcClientCreator.getOidcClient()).onItem() .transformToUni(tokens -> restClientWithTokenHeaderParam.getAdminName("Bearer " + tokens.getAccessToken())); } @GET @Path("user-name-with-oidc-client-token-header-param-blocking") @Produces("text/plain") public String getUserNameWithOidcClientTokenHeaderParamBlocking() { 11 Tokens tokens = tokenHelper.getTokens(oidcClientCreator.getOidcClient()).await().indefinitely(); return restClientWithTokenHeaderParam.getUserName("Bearer " + tokens.getAccessToken()).await().indefinitely(); } @GET @Path("admin-name-with-oidc-client-token-header-param-blocking") @Produces("text/plain") public String getAdminNameWithOidcClientTokenHeaderParamBlocking() { 12 Tokens tokens = tokenHelper.getTokens(oidcClientCreator.getOidcClient()).await().indefinitely(); return restClientWithTokenHeaderParam.getAdminName("Bearer " + tokens.getAccessToken()).await().indefinitely(); } } 1 5 6 FrontendResource uses the injected RestClientWithOidcClientFilter REST client with the OIDC client filter to get and propagate an access token to ProtectedResource when either /frontend/user-name-with-oidc-client-token or /frontend/admin-name-with-oidc-client-token is called. 2 7 8 FrontendResource uses the injected RestClientWithTokenPropagationFilter REST client with the OIDC token propagation filter to propagate the current incoming access token to ProtectedResource when either /frontend/user-name-with-propagated-token or /frontend/admin-name-with-propagated-token is called. 4 9 10 FrontendResource uses the programmatically created OIDC client to get and propagate an access token to ProtectedResource by passing it directly to the injected RestClientWithTokenHeaderParam REST client's method as an HTTP Authorization header value, when either /frontend/user-name-with-oidc-client-token-header-param or /frontend/admin-name-with-oidc-client-token-header-param is called. 11 12 Sometimes, one may have to acquire tokens in a blocking manner before propagating them with the REST client. This example shows how to acquire the tokens in such cases. 3 io.quarkus.oidc.client.runtime.TokensHelper is a useful tool when OIDC client is used directly, without the OIDC client filter. To use TokensHelper , pass OIDC Client to it to get the tokens and TokensHelper acquires the tokens and refreshes them if necessary in a thread-safe way. Finally, add a Jakarta REST ExceptionMapper : package org.acme.security.openid.connect.client; import jakarta.ws.rs.core.Response; import jakarta.ws.rs.ext.ExceptionMapper; import jakarta.ws.rs.ext.Provider; import org.jboss.resteasy.reactive.ClientWebApplicationException; @Provider public class FrontendExceptionMapper implements ExceptionMapper<ClientWebApplicationException> { @Override public Response toResponse(ClientWebApplicationException t) { return Response.status(t.getResponse().getStatus()).build(); } } This exception mapper is only added to verify during the tests that ProtectedResource returns 403 when the token has no expected role. Without this mapper, Quarkus REST (formerly RESTEasy Reactive) would correctly convert the exceptions that escape from REST client calls to 500 to avoid leaking the information from the downstream resources such as ProtectedResource . However, in the tests, it would not be possible to assert that 500 is caused by an authorization exception instead of some internal error. 2.6. 
Configuring the application Having prepared the code, you configure the application: # Configure OIDC %prod.quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.client-id=backend-service quarkus.oidc.credentials.secret=secret # Tell Dev Services for Keycloak to import the realm file # This property is ineffective when running the application in JVM or Native modes but only in dev and test modes. quarkus.keycloak.devservices.realm-path=quarkus-realm.json # Configure OIDC Client quarkus.oidc-client.auth-server-url=${quarkus.oidc.auth-server-url} quarkus.oidc-client.client-id=${quarkus.oidc.client-id} quarkus.oidc-client.credentials.secret=${quarkus.oidc.credentials.secret} quarkus.oidc-client.grant.type=password quarkus.oidc-client.grant-options.password.username=alice quarkus.oidc-client.grant-options.password.password=alice # Configure REST clients %prod.port=8080 %dev.port=8080 %test.port=8081 org.acme.security.openid.connect.client.RestClientWithOidcClientFilter/mp-rest/url=http://localhost:${port}/protected org.acme.security.openid.connect.client.RestClientWithTokenHeaderParam/mp-rest/url=http://localhost:${port}/protected org.acme.security.openid.connect.client.RestClientWithTokenPropagationFilter/mp-rest/url=http://localhost:${port}/protected The preceding configuration references Keycloak, which is used by ProtectedResource to verify the incoming access tokens and by OidcClient to get the tokens for a user alice by using a password grant. All three REST clients point to ProtectedResource 's HTTP address. Note Adding a %prod. profile prefix to quarkus.oidc.auth-server-url ensures that Dev Services for Keycloak launches a container for you when the application is run in dev or test modes. For more information, see the Running the application in dev mode section. 2.7. Starting and configuring the Keycloak server Note Do not start the Keycloak server when you run the application in dev or test modes; Dev Services for Keycloak launches a container. For more information, see the Running the application in dev mode section. Ensure you put the realm configuration file on the classpath, in the target/classes directory. This placement ensures that the file is automatically imported in dev mode. However, if you have already built a complete solution , you do not need to add the realm file to the classpath because the build process has already done so. To start a Keycloak Server, you can use Docker and just run the following command: docker run --name keycloak -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin -p 8180:8080 quay.io/keycloak/keycloak:{keycloak.version} start-dev Set {keycloak.version} to 25.0.6 or later. You can access your Keycloak Server at localhost:8180 . Log in as the admin user to access the Keycloak Administration Console. The password is admin . Import the realm configuration file to create a new realm. For more details, see the Keycloak documentation about how to create a new realm . This quarkus realm file adds a frontend client, and alice and admin users. alice has a user role. admin has both user and admin roles. 2.8. Running the application in dev mode To run the application in dev mode, use: Using the Quarkus CLI: quarkus dev Using Maven: ./mvnw quarkus:dev Using Gradle: ./gradlew --console=plain quarkusDev Dev Services for Keycloak launches a Keycloak container and imports quarkus-realm.json . Open the Dev UI available at /q/dev-ui and click a Keycloak provider link in the OpenID Connect Dev UI card. 
When asked, log in to a Single Page Application provided by the OpenID Connect Dev UI: Log in as alice , with the password, alice . This user has both admin and user roles. Access /frontend/user-name-with-propagated-token , which returns 200 . Access /frontend/admin-name-with-propagated-token , which returns 200 . Log out and back in as bob with the password, bob . This user has a user role. Access /frontend/user-name-with-propagated-token , which returns 200 . Access /frontend/admin-name-with-propagated-token , which returns 403 . You have tested that FrontendResource can propagate the access tokens from the OpenID Connect Dev UI. 2.9. Running the application in JVM mode After exploring the application in dev mode, you can run it as a standard Java application. First, compile it: Using the Quarkus CLI: quarkus build Using Maven: ./mvnw install Using Gradle: ./gradlew build Then, run it: java -jar target/quarkus-app/quarkus-run.jar 2.10. Running the application in native mode You can compile this demo into native code; no modifications are required. This implies that you no longer need to install a JVM on your production environment, as the runtime technology is included in the produced binary and optimized to run with minimal resources. Compilation takes longer, so this step is turned off by default. To build again, enable the native profile: Using the Quarkus CLI: quarkus build --native Using Maven: ./mvnw install -Dnative Using Gradle: ./gradlew build -Dquarkus.native.enabled=true After a little while, when the build finishes, you can run the native binary directly: ./target/security-openid-connect-quickstart-1.0.0-SNAPSHOT-runner 2.11. Testing the application For more information about testing your application in dev mode, see the preceding Running the application in dev mode section. You can test the application launched in JVM or Native modes with curl . Obtain an access token for alice : export access_token=$(\ curl --insecure -X POST http://localhost:8180/realms/quarkus/protocol/openid-connect/token \ --user backend-service:secret \ -H 'content-type: application/x-www-form-urlencoded' \ -d 'username=alice&password=alice&grant_type=password' | jq --raw-output '.access_token' \ ) Use this token to call /frontend/user-name-with-propagated-token . This command returns the 200 status code and the name alice : curl -i -X GET \ http://localhost:8080/frontend/user-name-with-propagated-token \ -H "Authorization: Bearer "$access_token Use the same token to call /frontend/admin-name-with-propagated-token . In contrast to the preceding command, this command returns 403 because alice has only a user role: curl -i -X GET \ http://localhost:8080/frontend/admin-name-with-propagated-token \ -H "Authorization: Bearer "$access_token Next, obtain an access token for admin : export access_token=$(\ curl --insecure -X POST http://localhost:8180/realms/quarkus/protocol/openid-connect/token \ --user backend-service:secret \ -H 'content-type: application/x-www-form-urlencoded' \ -d 'username=admin&password=admin&grant_type=password' | jq --raw-output '.access_token' \ ) Use this token to call /frontend/user-name-with-propagated-token . This command returns a 200 status code and the name admin : curl -i -X GET \ http://localhost:8080/frontend/user-name-with-propagated-token \ -H "Authorization: Bearer "$access_token Use the same token to call /frontend/admin-name-with-propagated-token . 
This command also returns the 200 status code and the name admin because admin has both user and admin roles: curl -i -X GET \ http://localhost:8080/frontend/admin-name-with-propagated-token \ -H "Authorization: Bearer "$access_token Next, check the FrontendResource methods, which do not propagate the existing tokens but use OidcClient to get and propagate the tokens. As already shown, OidcClient is configured to get the tokens for the alice user. curl -i -X GET \ http://localhost:8080/frontend/user-name-with-oidc-client-token This command returns the 200 status code and the name alice . curl -i -X GET \ http://localhost:8080/frontend/admin-name-with-oidc-client-token In contrast with the preceding command, this command returns a 403 status code. Next, test that the programmatically created OIDC client correctly acquires and propagates the token with RestClientWithTokenHeaderParam both in reactive and imperative (blocking) modes. Call the /user-name-with-oidc-client-token-header-param . This command returns the 200 status code and the name alice : curl -i -X GET \ http://localhost:8080/frontend/user-name-with-oidc-client-token-header-param Call the /admin-name-with-oidc-client-token-header-param . In contrast with the preceding command, this command returns a 403 status code: curl -i -X GET \ http://localhost:8080/frontend/admin-name-with-oidc-client-token-header-param Next, test the endpoints that use the OIDC client in blocking mode. Call the /user-name-with-oidc-client-token-header-param-blocking . This command returns the 200 status code and the name alice : curl -i -X GET \ http://localhost:8080/frontend/user-name-with-oidc-client-token-header-param-blocking Call the /admin-name-with-oidc-client-token-header-param-blocking . In contrast with the preceding command, this command returns a 403 status code: curl -i -X GET \ http://localhost:8080/frontend/admin-name-with-oidc-client-token-header-param-blocking 2.12. References OpenID Connect Client and Token Propagation Reference Guide OIDC Bearer token authentication Quarkus Security overview
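The individual curl calls above can be collected into a single script that exercises all three token-handling strategies at once. This is a minimal sketch, not part of the quickstart itself: it assumes the application runs on localhost:8080, Keycloak on localhost:8180 with the quarkus realm, and jq is installed; the get_token helper is introduced here only for illustration.

#!/bin/bash
set -euo pipefail

TOKEN_URL=http://localhost:8180/realms/quarkus/protocol/openid-connect/token
APP=http://localhost:8080/frontend

# Hypothetical helper: obtain an access token for a given user via the password grant
get_token() {
  curl --insecure -s -X POST "$TOKEN_URL" \
    --user backend-service:secret \
    -H 'content-type: application/x-www-form-urlencoded' \
    -d "username=$1&password=$2&grant_type=password" | jq --raw-output '.access_token'
}

alice_token=$(get_token alice alice)

# Token propagation: alice has only the "user" role, so expect 200 then 403
curl -s -o /dev/null -w "user-name-with-propagated-token: %{http_code}\n" \
  -H "Authorization: Bearer $alice_token" "$APP/user-name-with-propagated-token"
curl -s -o /dev/null -w "admin-name-with-propagated-token: %{http_code}\n" \
  -H "Authorization: Bearer $alice_token" "$APP/admin-name-with-propagated-token"

# OIDC client filter: no incoming token is needed, the filter obtains one for alice
curl -s -o /dev/null -w "user-name-with-oidc-client-token: %{http_code}\n" \
  "$APP/user-name-with-oidc-client-token"

# Programmatically created OIDC client, reactive and blocking variants
curl -s -o /dev/null -w "user-name-with-oidc-client-token-header-param: %{http_code}\n" \
  "$APP/user-name-with-oidc-client-token-header-param"
curl -s -o /dev/null -w "user-name-with-oidc-client-token-header-param-blocking: %{http_code}\n" \
  "$APP/user-name-with-oidc-client-token-header-param-blocking"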
[ "quarkus create app org.acme:security-openid-connect-client-quickstart --extension='oidc,rest-client-oidc-filter,rest-client-oidc-token-propagation,rest' --no-code cd security-openid-connect-client-quickstart", "mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.15.1:create -DprojectGroupId=org.acme -DprojectArtifactId=security-openid-connect-client-quickstart -Dextensions='oidc,rest-client-oidc-filter,rest-client-oidc-token-propagation,rest' -DnoCode cd security-openid-connect-client-quickstart", "quarkus extension add oidc,rest-client-oidc-filter,rest-client-oidc-token-propagation,rest", "./mvnw quarkus:add-extension -Dextensions='oidc,rest-client-oidc-filter,rest-client-oidc-token-propagation,rest'", "./gradlew addExtension --extensions='oidc,rest-client-oidc-filter,rest-client-oidc-token-propagation,rest'", "<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-oidc</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-rest-client-oidc-filter</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-rest-client-oidc-token-propagation</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-rest</artifactId> </dependency>", "implementation(\"io.quarkus:quarkus-oidc,rest-client-oidc-filter,rest-client-oidc-token-propagation,rest\")", "package org.acme.security.openid.connect.client; import jakarta.annotation.security.RolesAllowed; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import io.quarkus.security.Authenticated; import io.smallrye.mutiny.Uni; import org.eclipse.microprofile.jwt.JsonWebToken; @Path(\"/protected\") @Authenticated public class ProtectedResource { @Inject JsonWebToken principal; @GET @RolesAllowed(\"user\") @Produces(\"text/plain\") @Path(\"userName\") public Uni<String> userName() { return Uni.createFrom().item(principal.getName()); } @GET @RolesAllowed(\"admin\") @Produces(\"text/plain\") @Path(\"adminName\") public Uni<String> adminName() { return Uni.createFrom().item(principal.getName()); } }", "package org.acme.security.openid.connect.client; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; import io.quarkus.oidc.client.filter.OidcClientFilter; import io.smallrye.mutiny.Uni; @RegisterRestClient @OidcClientFilter 1 @Path(\"/\") public interface RestClientWithOidcClientFilter { @GET @Produces(\"text/plain\") @Path(\"userName\") Uni<String> getUserName(); @GET @Produces(\"text/plain\") @Path(\"adminName\") Uni<String> getAdminName(); }", "package org.acme.security.openid.connect.client; import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; import io.smallrye.mutiny.Uni; import jakarta.ws.rs.GET; import jakarta.ws.rs.HeaderParam; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; @RegisterRestClient @Path(\"/\") public interface RestClientWithTokenHeaderParam { @GET @Produces(\"text/plain\") @Path(\"userName\") Uni<String> getUserName(@HeaderParam(\"Authorization\") String authorization); 1 @GET @Produces(\"text/plain\") @Path(\"adminName\") Uni<String> getAdminName(@HeaderParam(\"Authorization\") String authorization); 2 }", "package org.acme.security.openid.connect.client; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; import 
io.quarkus.oidc.token.propagation.AccessToken; import io.smallrye.mutiny.Uni; @RegisterRestClient @AccessToken 1 @Path(\"/\") public interface RestClientWithTokenPropagationFilter { @GET @Produces(\"text/plain\") @Path(\"userName\") Uni<String> getUserName(); @GET @Produces(\"text/plain\") @Path(\"adminName\") Uni<String> getAdminName(); }", "package org.acme.security.openid.connect.client; import java.util.Map; import org.eclipse.microprofile.config.inject.ConfigProperty; import io.quarkus.oidc.client.OidcClient; import io.quarkus.oidc.client.OidcClientConfig; import io.quarkus.oidc.client.OidcClientConfig.Grant.Type; import io.quarkus.oidc.client.OidcClients; import io.quarkus.runtime.StartupEvent; import io.smallrye.mutiny.Uni; import jakarta.enterprise.context.ApplicationScoped; import jakarta.enterprise.event.Observes; import jakarta.inject.Inject; @ApplicationScoped public class OidcClientCreator { @Inject OidcClients oidcClients; 1 @ConfigProperty(name = \"quarkus.oidc.auth-server-url\") String oidcProviderAddress; private volatile OidcClient oidcClient; public void startup(@Observes StartupEvent event) { createOidcClient().subscribe().with(client -> {oidcClient = client;}); } public OidcClient getOidcClient() { return oidcClient; } private Uni<OidcClient> createOidcClient() { OidcClientConfig cfg = new OidcClientConfig(); cfg.setId(\"myclient\"); cfg.setAuthServerUrl(oidcProviderAddress); cfg.setClientId(\"backend-service\"); cfg.getCredentials().setSecret(\"secret\"); cfg.getGrant().setType(Type.PASSWORD); cfg.setGrantOptions(Map.of(\"password\", Map.of(\"username\", \"alice\", \"password\", \"alice\"))); return oidcClients.newClient(cfg); } }", "package org.acme.security.openid.connect.client; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import io.quarkus.oidc.client.Tokens; import io.quarkus.oidc.client.runtime.TokensHelper; import org.eclipse.microprofile.rest.client.inject.RestClient; import io.smallrye.mutiny.Uni; @Path(\"/frontend\") public class FrontendResource { @Inject @RestClient RestClientWithOidcClientFilter restClientWithOidcClientFilter; 1 @Inject @RestClient RestClientWithTokenPropagationFilter restClientWithTokenPropagationFilter; 2 @Inject OidcClientCreator oidcClientCreator; TokensHelper tokenHelper = new TokensHelper(); 3 @Inject @RestClient RestClientWithTokenHeaderParam restClientWithTokenHeaderParam; 4 @GET @Path(\"user-name-with-oidc-client-token\") @Produces(\"text/plain\") public Uni<String> getUserNameWithOidcClientToken() { 5 return restClientWithOidcClientFilter.getUserName(); } @GET @Path(\"admin-name-with-oidc-client-token\") @Produces(\"text/plain\") public Uni<String> getAdminNameWithOidcClientToken() { 6 return restClientWithOidcClientFilter.getAdminName(); } @GET @Path(\"user-name-with-propagated-token\") @Produces(\"text/plain\") public Uni<String> getUserNameWithPropagatedToken() { 7 return restClientWithTokenPropagationFilter.getUserName(); } @GET @Path(\"admin-name-with-propagated-token\") @Produces(\"text/plain\") public Uni<String> getAdminNameWithPropagatedToken() { 8 return restClientWithTokenPropagationFilter.getAdminName(); } @GET @Path(\"user-name-with-oidc-client-token-header-param\") @Produces(\"text/plain\") public Uni<String> getUserNameWithOidcClientTokenHeaderParam() { 9 return tokenHelper.getTokens(oidcClientCreator.getOidcClient()).onItem() .transformToUni(tokens -> restClientWithTokenHeaderParam.getUserName(\"Bearer \" + tokens.getAccessToken())); } @GET 
@Path(\"admin-name-with-oidc-client-token-header-param\") @Produces(\"text/plain\") public Uni<String> getAdminNameWithOidcClientTokenHeaderParam() { 10 return tokenHelper.getTokens(oidcClientCreator.getOidcClient()).onItem() .transformToUni(tokens -> restClientWithTokenHeaderParam.getAdminName(\"Bearer \" + tokens.getAccessToken())); } @GET @Path(\"user-name-with-oidc-client-token-header-param-blocking\") @Produces(\"text/plain\") public String getUserNameWithOidcClientTokenHeaderParamBlocking() { 11 Tokens tokens = tokenHelper.getTokens(oidcClientCreator.getOidcClient()).await().indefinitely(); return restClientWithTokenHeaderParam.getUserName(\"Bearer \" + tokens.getAccessToken()).await().indefinitely(); } @GET @Path(\"admin-name-with-oidc-client-token-header-param-blocking\") @Produces(\"text/plain\") public String getAdminNameWithOidcClientTokenHeaderParamBlocking() { 12 Tokens tokens = tokenHelper.getTokens(oidcClientCreator.getOidcClient()).await().indefinitely(); return restClientWithTokenHeaderParam.getAdminName(\"Bearer \" + tokens.getAccessToken()).await().indefinitely(); } }", "package org.acme.security.openid.connect.client; import jakarta.ws.rs.core.Response; import jakarta.ws.rs.ext.ExceptionMapper; import jakarta.ws.rs.ext.Provider; import org.jboss.resteasy.reactive.ClientWebApplicationException; @Provider public class FrontendExceptionMapper implements ExceptionMapper<ClientWebApplicationException> { @Override public Response toResponse(ClientWebApplicationException t) { return Response.status(t.getResponse().getStatus()).build(); } }", "Configure OIDC %prod.quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.client-id=backend-service quarkus.oidc.credentials.secret=secret Tell Dev Services for Keycloak to import the realm file This property is ineffective when running the application in JVM or Native modes but only in dev and test modes. 
quarkus.keycloak.devservices.realm-path=quarkus-realm.json Configure OIDC Client quarkus.oidc-client.auth-server-url=USD{quarkus.oidc.auth-server-url} quarkus.oidc-client.client-id=USD{quarkus.oidc.client-id} quarkus.oidc-client.credentials.secret=USD{quarkus.oidc.credentials.secret} quarkus.oidc-client.grant.type=password quarkus.oidc-client.grant-options.password.username=alice quarkus.oidc-client.grant-options.password.password=alice Configure REST clients %prod.port=8080 %dev.port=8080 %test.port=8081 org.acme.security.openid.connect.client.RestClientWithOidcClientFilter/mp-rest/url=http://localhost:USD{port}/protected org.acme.security.openid.connect.client.RestClientWithTokenHeaderParam/mp-rest/url=http://localhost:USD{port}/protected org.acme.security.openid.connect.client.RestClientWithTokenPropagationFilter/mp-rest/url=http://localhost:USD{port}/protected", "docker run --name keycloak -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin -p 8180:8080 quay.io/keycloak/keycloak:{keycloak.version} start-dev", "quarkus dev", "./mvnw quarkus:dev", "./gradlew --console=plain quarkusDev", "quarkus build", "./mvnw install", "./gradlew build", "java -jar target/quarkus-app/quarkus-run.jar", "quarkus build --native", "./mvnw install -Dnative", "./gradlew build -Dquarkus.native.enabled=true", "./target/security-openid-connect-quickstart-1.0.0-SNAPSHOT-runner", "export access_token=USD( curl --insecure -X POST http://localhost:8180/realms/quarkus/protocol/openid-connect/token --user backend-service:secret -H 'content-type: application/x-www-form-urlencoded' -d 'username=alice&password=alice&grant_type=password' | jq --raw-output '.access_token' )", "curl -i -X GET http://localhost:8080/frontend/user-name-with-propagated-token -H \"Authorization: Bearer \"USDaccess_token", "curl -i -X GET http://localhost:8080/frontend/admin-name-with-propagated-token -H \"Authorization: Bearer \"USDaccess_token", "export access_token=USD( curl --insecure -X POST http://localhost:8180/realms/quarkus/protocol/openid-connect/token --user backend-service:secret -H 'content-type: application/x-www-form-urlencoded' -d 'username=admin&password=admin&grant_type=password' | jq --raw-output '.access_token' )", "curl -i -X GET http://localhost:8080/frontend/user-name-with-propagated-token -H \"Authorization: Bearer \"USDaccess_token", "curl -i -X GET http://localhost:8080/frontend/admin-name-with-propagated-token -H \"Authorization: Bearer \"USDaccess_token", "curl -i -X GET http://localhost:8080/frontend/user-name-with-oidc-client-token", "curl -i -X GET http://localhost:8080/frontend/admin-name-with-oidc-client-token", "curl -i -X GET http://localhost:8080/frontend/user-name-with-oidc-client-token-header-param", "curl -i -X GET http://localhost:8080/frontend/admin-name-with-oidc-client-token-header-param", "curl -i -X GET http://localhost:8080/frontend/user-name-with-oidc-client-token-header-param-blocking", "curl -i -X GET http://localhost:8080/frontend/admin-name-with-oidc-client-token-header-param-blocking" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/openid_connect_oidc_client_and_token_propagation/security-openid-connect-client
Chapter 21. Atomic Host and Containers
Chapter 21. Atomic Host and Containers Red Hat Enterprise Linux Atomic Host Red Hat Enterprise Linux Atomic Host is a secure, lightweight, and minimal-footprint operating system optimized to run Linux containers.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/atomic_host_and_containers
Chapter 1. Support policy
Chapter 1. Support policy Red Hat will support select major versions of Red Hat build of OpenJDK in its products. For consistency, these are the same versions that Oracle designates as long-term support (LTS) for the Oracle JDK. A major version of Red Hat build of OpenJDK will be supported for a minimum of six years from the time that version is first introduced. For more information, see the OpenJDK Life Cycle and Support Policy . Note RHEL 6 reached the end of its life in November 2020. Because of this, RHEL 6 is no longer a supported configuration for Red Hat build of OpenJDK.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.432/openjdk8-support-policy
Appendix B. Using Red Hat Maven repositories
Appendix B. Using Red Hat Maven repositories This section describes how to use Red Hat-provided Maven repositories in your software. B.1. Using the online repository Red Hat maintains a central Maven repository for use with your Maven-based projects. For more information, see the repository welcome page . There are two ways to configure Maven to use the Red Hat repository: Add the repository to your Maven settings Add the repository to your POM file Adding the repository to your Maven settings This method of configuration applies to all Maven projects owned by your user, as long as your POM file does not override the repository configuration and the included profile is enabled. Procedure Locate the Maven settings.xml file. It is usually inside the .m2 directory in the user home directory. If the file does not exist, use a text editor to create it. On Linux or UNIX: /home/ <username> /.m2/settings.xml On Windows: C:\Users\<username>\.m2\settings.xml Add a new profile containing the Red Hat repository to the profiles element of the settings.xml file, as in the following example: Example: A Maven settings.xml file containing the Red Hat repository <settings> <profiles> <profile> <id>red-hat</id> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>red-hat</activeProfile> </activeProfiles> </settings> For more information about Maven configuration, see the Maven settings reference . Adding the repository to your POM file To configure a repository directly in your project, add a new entry to the repositories element of your POM file, as in the following example: Example: A Maven pom.xml file containing the Red Hat repository <project> <modelVersion>4.0.0</modelVersion> <groupId>com.example</groupId> <artifactId>example-app</artifactId> <version>1.0.0</version> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> </project> For more information about POM file configuration, see the Maven POM reference . B.2. Using a local repository Red Hat provides file-based Maven repositories for some of its components. These are delivered as downloadable archives that you can extract to your local filesystem. To configure Maven to use a locally extracted repository, apply the following XML in your Maven settings or POM file: <repository> <id>red-hat-local</id> <url> USD{repository-url} </url> </repository> USD{repository-url} must be a file URL containing the local filesystem path of the extracted repository. Table B.1. Example URLs for local Maven repositories Operating system Filesystem path URL Linux or UNIX /home/alice/maven-repository file:/home/alice/maven-repository Windows C:\repos\red-hat file:C:\repos\red-hat
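As a quick sanity check, the following commands, run from a project directory, confirm that Maven picks up the profile and resolves dependencies through the repository configured above. This is only a sketch: the profile id red-hat is the one used in the example settings.xml, so adjust it if you named your profile differently.

# List the profiles Maven considers active for this build;
# the red-hat profile from the example settings.xml should appear here
mvn help:active-profiles

# Resolve the project's dependencies and watch the output for downloads
# from https://maven.repository.redhat.com/ga
mvn -U dependency:resolve

# If the profile is not active by default, enable it for a single invocation
mvn clean install -P red-hat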
[ "/home/ <username> /.m2/settings.xml", "C:\\Users\\<username>\\.m2\\settings.xml", "<settings> <profiles> <profile> <id>red-hat</id> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>red-hat</activeProfile> </activeProfiles> </settings>", "<project> <modelVersion>4.0.0</modelVersion> <groupId>com.example</groupId> <artifactId>example-app</artifactId> <version>1.0.0</version> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> </project>", "<repository> <id>red-hat-local</id> <url> USD{repository-url} </url> </repository>" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_jms_client/using_red_hat_maven_repositories
Chapter 2. Architecture of OpenShift Data Foundation
Chapter 2. Architecture of OpenShift Data Foundation Red Hat OpenShift Data Foundation provides services for, and can run internally from the Red Hat OpenShift Container Platform. Figure 2.1. Red Hat OpenShift Data Foundation architecture Red Hat OpenShift Data Foundation supports deployment into Red Hat OpenShift Container Platform clusters deployed on installer-provisioned or user-provisioned infrastructure. For details about these two approaches, see OpenShift Container Platform - Installation process . To know more about interoperability of components for Red Hat OpenShift Data Foundation and Red Hat OpenShift Container Platform, see Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . For information about the architecture and lifecycle of OpenShift Container Platform, see OpenShift Container Platform architecture . Tip For IBM Power, see OpenShift Container Platform - Installation process . 2.1. About operators Red Hat OpenShift Data Foundation comprises three main operators, which codify administrative tasks and custom resources so that you can easily automate the task and resource characteristics. Administrators define the desired end state of the cluster, and the OpenShift Data Foundation operators ensure the cluster is either in that state, or approaching that state, with minimal administrator intervention. OpenShift Data Foundation operator A meta-operator that draws on other operators in specific tested ways to codify and enforce the recommendations and requirements of a supported Red Hat OpenShift Data Foundation deployment. The rook-ceph and noobaa operators provide the storage cluster resource that wraps these resources. Rook-ceph operator This operator automates the packaging, deployment, management, upgrading, and scaling of persistent storage and file, block, and object services. It creates block and file storage classes for all environments, and creates an object storage class and services Object Bucket Claims (OBCs) made against it in on-premises environments. Additionally, for internal mode clusters, it provides the ceph cluster resource, which manages the deployments and services representing the following: Object Storage Daemons (OSDs) Monitors (MONs) Manager (MGR) Metadata servers (MDS) RADOS Object Gateways (RGWs) on-premises only Multicloud Object Gateway operator This operator automates the packaging, deployment, management, upgrading, and scaling of the Multicloud Object Gateway (MCG) object service. It creates an object storage class and services the OBCs made against it. Additionally, it provides the NooBaa cluster resource, which manages the deployments and services for NooBaa core, database, and endpoint. 2.2. Storage cluster deployment approaches The growing list of operating modalities is evidence that flexibility is a core tenet of Red Hat OpenShift Data Foundation. This section provides information that helps you select the most appropriate approach for your environments. You can deploy Red Hat OpenShift Data Foundation either entirely within OpenShift Container Platform (Internal approach) or make available the services from a cluster running outside of OpenShift Container Platform (External approach). 2.2.1. Internal approach Deployment of Red Hat OpenShift Data Foundation entirely within Red Hat OpenShift Container Platform has all the benefits of operator based deployment and management. 
You can use the internal-attached device approach in the graphical user interface (GUI) to deploy Red Hat OpenShift Data Foundation in internal mode using the local storage operator and local storage devices. Ease of deployment and management are the highlights of running OpenShift Data Foundation services internally on OpenShift Container Platform. There are two different deployment modalities available when Red Hat OpenShift Data Foundation is running entirely within Red Hat OpenShift Container Platform: Simple Optimized Simple deployment Red Hat OpenShift Data Foundation services run co-resident with applications. The operators in Red Hat OpenShift Container Platform manage these applications. A simple deployment is best for situations where: Storage requirements are not clear. Red Hat OpenShift Data Foundation services run co-resident with the applications. Creating a node instance of a specific size is difficult, for example, on bare metal. In order for Red Hat OpenShift Data Foundation to run co-resident with the applications, the nodes must have local storage devices, or portable storage devices attached to them dynamically, like EBS volumes on EC2, or vSphere Virtual Volumes on VMware, or SAN volumes. Note PowerVC dynamically provisions the SAN volumes. Optimized deployment Red Hat OpenShift Data Foundation services run on dedicated infrastructure nodes. Red Hat OpenShift Container Platform manages these infrastructure nodes. An optimized approach is best for situations when: Storage requirements are clear. Red Hat OpenShift Data Foundation services run on dedicated infrastructure nodes. Creating a node instance of a specific size is easy, for example, on cloud, virtualized environment, and so on. 2.2.2. External approach Red Hat OpenShift Data Foundation exposes the Red Hat Ceph Storage services running outside of the OpenShift Container Platform cluster as storage classes. The external approach is best used when: Storage requirements are significant (600+ storage devices). Multiple OpenShift Container Platform clusters need to consume storage services from a common external cluster. Another team, Site Reliability Engineering (SRE), storage, and so on, needs to manage the external cluster providing storage services. Possibly pre-existing. 2.3. Node types Nodes run the container runtime, as well as services, to ensure that the containers are running, and maintain network communication and separation between the pods. In OpenShift Data Foundation, there are three types of nodes. Table 2.1. Types of nodes Node Type Description Master These nodes run processes that expose the Kubernetes API, watch and schedule newly created pods, maintain node health and quantity, and control interaction with underlying cloud providers. Infrastructure (Infra) Infra nodes run cluster level infrastructure services such as logging, metrics, registry, and routing. These are optional in OpenShift Container Platform clusters. In order to separate OpenShift Data Foundation layer workload from applications, ensure that you use infra nodes for OpenShift Data Foundation in virtualized and cloud environments. To create Infra nodes, you can provision new nodes labeled as infra . For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation Worker Worker nodes are also known as application nodes since they run applications. When OpenShift Data Foundation is deployed in internal mode, you require a minimal cluster of 3 worker nodes. 
Make sure that the nodes are spread across 3 different racks, or availability zones, to ensure availability. In order for OpenShift Data Foundation to run on worker nodes, you need to attach local storage devices or portable storage devices to the worker nodes dynamically. When OpenShift Data Foundation is deployed in external mode, it runs on multiple nodes. This allows Kubernetes to reschedule on the available nodes in case of a failure. Note OpenShift Data Foundation requires the same number of subscriptions as OpenShift Container Platform. However, if OpenShift Data Foundation is running on infra nodes, OpenShift does not require an OpenShift Container Platform subscription for these nodes. Therefore, the OpenShift Data Foundation control plane does not require additional OpenShift Container Platform and OpenShift Data Foundation subscriptions. For more information, see Chapter 6, Subscriptions .
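As a minimal sketch of the infra-node preparation described above, the following commands label a worker node for OpenShift Data Foundation and optionally dedicate it as an infra node. The label and taint keys shown here are the ones commonly used by OpenShift Data Foundation, but verify them against the linked knowledge base article for your version, and replace <node-name> with a real node.

# Allow OpenShift Data Foundation to schedule its pods on this node
oc label node <node-name> cluster.ocs.openshift.io/openshift-storage=""

# Optionally mark the node as an infra node and keep general application pods off it
oc label node <node-name> node-role.kubernetes.io/infra=""
oc adm taint nodes <node-name> node.ocs.openshift.io/storage="true":NoSchedule

# Confirm the labels before creating the storage cluster
oc get nodes --show-labels | grep openshift-storage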
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/planning_your_deployment/odf-architecture_rhodf
Chapter 9. Designing a Secure Directory
Chapter 9. Designing a Secure Directory How the data in Red Hat Directory Server are secured affects all of the design areas. Any security design needs to protect the data contained by the directory and meet the security and privacy needs of the users and applications. This chapter describes how to analyze the security needs and explains how to design the directory to meet these needs. 9.1. About Security Threats There are many potential threats to the security of the directory. Understanding the most common threats helps outline the overall security design. Threats to directory security fall into three main categories: Unauthorized access Unauthorized tampering Denial of service 9.1.1. Unauthorized Access Protecting the directory from unauthorized access may seem straightforward, but implementing a secure solution may be more complex than it first appears. A number of potential access points exist on the directory information delivery path where an unauthorized client may gain access to data. For example, an unauthorized client can use another client's credentials to access the data. This is particularly likely when the directory uses unprotected passwords. An unauthorized client can also eavesdrop on the information exchanged between a legitimate client and Directory Server. Unauthorized access can occur from inside the company or, if the company is connected to an extranet or to the Internet, from outside the company. The following scenarios describe just a few examples of how an unauthorized client might access the directory data. The authentication methods, password policies, and access control mechanisms provided by the Directory Server offer efficient ways of preventing unauthorized access. See the following sections for more information: Section 9.4, "Selecting Appropriate Authentication Methods" Section 9.6, "Designing a Password Policy" Section 9.7, "Designing Access Control" 9.1.2. Unauthorized Tampering If intruders gain access to the directory or intercept communications between Directory Server and a client application, they have the potential to modify (or tamper with) the directory data. The directory service is useless if the data can no longer be trusted by clients or if the directory itself cannot trust the modifications and queries it receives from clients. For example, if the directory cannot detect tampering, an attacker could change a client's request to the server (or not forward it) and change the server's response to the client. TLS and similar technologies can solve this problem by signing information at either end of the connection. For more information about using TLS with Directory Server, see Section 9.9, "Securing Server Connections" . 9.1.3. Denial of Service In a denial of service attack, the attacker's goal is to prevent the directory from providing service to its clients. For example, an attacker might use all of the system's resources, thereby preventing these resources from being used by anyone else. Directory Server can prevent denial of service attacks by setting limits on the resources allocated to a particular bind DN. For more information about setting resource limits based on the user's bind DN, see the "User Account Management" chapter in the Red Hat Directory Server Administration Guide .
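As an illustration of the bind-DN based resource limits mentioned for mitigating denial of service attacks, the following ldapmodify sketch raises the limits on a single application account. The entry DN is hypothetical, and the attribute names and values should be confirmed against the "User Account Management" chapter of the Administration Guide before use.

# Set per-bind-DN resource limits on one directory account (values are examples only)
ldapmodify -x -D "cn=Directory Manager" -W -h server.example.com -p 389 <<EOF
dn: uid=app-client,ou=People,dc=example,dc=com
changetype: modify
add: nsSizeLimit
nsSizeLimit: 500
-
add: nsLookThroughLimit
nsLookThroughLimit: 5000
-
add: nsTimeLimit
nsTimeLimit: 300
EOF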
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/Designing_a_Secure_Directory
Chapter 4. Creating a standalone broker
Chapter 4. Creating a standalone broker You can get started quickly with AMQ Broker by creating a standalone broker instance on your local machine, starting it, and producing and consuming some test messages. Prerequisites AMQ Broker must be installed. For more information, see Chapter 3, Installing AMQ Broker . 4.1. Creating a broker instance A broker instance is a directory containing the configuration and runtime data for a broker. To create a new broker instance, you first create a directory for the broker instance, and then use the artemis create command to create the broker instance. This procedure demonstrates how to create a simple, standalone broker on your local machine. The broker uses a basic, default configuration, and accepts connections from clients using any of the supported messaging protocols. Procedure Create a directory for the broker instance. If you are using... Do this... Red Hat Enterprise Linux Create a new directory to serve as the location for the broker instance. USD sudo mkdir /var/opt/amq-broker Assign the user that you created during installation. USD sudo chown -R amq-broker:amq-broker /var/opt/amq-broker Windows Use Windows Explorer to create a new folder to serve as the location for the broker instance. Use the artemis create command to create the broker. If you are using... Do this... Red Hat Enterprise Linux Switch to the user account you created during installation. USD su - amq-broker Change to the directory you just created for the broker instance. USD cd /var/opt/amq-broker From the broker instance's directory, create the broker instance. USD <install-dir> /bin/artemis create mybroker Windows Open a command prompt from the directory you just created for the broker instance. From the broker instance's directory, create the broker instance. > <install-dir> \bin\artemis.cmd create mybroker Follow the artemis create prompts to configure the broker instance. Example 4.1. Configuring a broker instance using artemis create USD /opt/redhat/amq-broker/bin/artemis create mybroker Creating ActiveMQ Artemis instance at: /var/opt/amq-broker/mybroker --user: is mandatory with this configuration: Please provide the default username: admin --password: is mandatory with this configuration: Please provide the default password: --role: is mandatory with this configuration: Please provide the default role: amq --allow-anonymous | --require-login: is mandatory with this configuration: Allow anonymous access? (Y/N): Y Auto tuning journal ... done! Your system can make 19.23 writes per millisecond, your journal-buffer-timeout will be 52000 You can now start the broker by executing: "/var/opt/amq-broker/mybroker/bin/artemis" run Or you can run the broker in the background using: "/var/opt/amq-broker/mybroker/bin/artemis-service" start 4.2. Starting the broker instance After the broker instance is created, you use the artemis run command to start it. Procedure Switch to the user account you created during installation. USD su - amq-broker Use the artemis run command to start the broker instance. The broker starts and displays log output with the following information: The location of the transaction logs and cluster configuration. The type of journal being used for message persistence (AIO in this case). The URI(s) that can accept client connections. By default, port 61616 can accept connections from any of the supported protocols (CORE, MQTT, AMQP, STOMP, HORNETQ, and OPENWIRE). There are separate, individual ports for each protocol as well. 
The web console is available at http://localhost:8161 . The Jolokia service (JMX over REST) is available at http://localhost:8161/jolokia . 4.3. Producing and consuming test messages After starting the broker, you should verify that it is running properly. This involves producing a few test messages, sending them to the broker, and then consuming them. Procedure Use the artemis producer command to produce a few test messages and send them to the broker. This command sends 100 messages to the helloworld address, which is created automatically on the broker. The producer connects to the broker by using the default port 61616, which accepts all supported messaging protocols. USD /opt/redhat/amq-broker/amq-broker-7.2.0/bin/artemis producer --destination helloworld --message-count 100 --url tcp://localhost:61616 Producer ActiveMQQueue[helloworld], thread=0 Started to calculate elapsed time ... Producer ActiveMQQueue[helloworld], thread=0 Produced: 100 messages Producer ActiveMQQueue[helloworld], thread=0 Elapsed time in second : 1 s Producer ActiveMQQueue[helloworld], thread=0 Elapsed time in milli second : 1289 milli seconds Use the web console to see the messages stored in the broker. In a web browser, navigate to http://localhost:8161 . Log into the console using the default username and default password that you created when you created the broker instance. The Attributes tab is displayed. On the Attributes tab, navigate to menu:[addresses > helloworld > queues > "anycast" > helloworld]. In the step, you sent messages to the helloworld address. This created a new anycast helloworld address with a queue (also named helloworld ). The Message count attribute shows that all 100 messages that were sent to helloworld are currently stored in this queue. Figure 4.1. Message count Use the artemis consumer command to consume 50 of the messages stored on the broker. This command consumes 50 of the messages that you sent to the broker previously. USD /opt/redhat/amq-broker/amq-broker-7.2.0/bin/artemis consumer --destination helloworld --message-count 50 --url tcp://localhost:61616 Consumer:: filter = null Consumer ActiveMQQueue[helloworld], thread=0 wait until 50 messages are consumed Consumer ActiveMQQueue[helloworld], thread=0 Consumed: 50 messages Consumer ActiveMQQueue[helloworld], thread=0 Consumer thread finished In the web console, verify that the Message count is now 50. 50 of the messages were consumed, which leaves 50 messages stored in the helloworld queue. Stop the broker and verify that the 50 remaining messages are still stored in the helloworld queue. In the terminal in which the broker is running, press Ctrl + C to stop the broker. Restart the broker. USD /var/opt/amq-broker/mybroker/bin/artemis run In the web console, navigate back to the helloworld queue and verify that there are still 50 messages stored in the queue. Consume the remaining 50 messages. USD /opt/redhat/amq-broker/amq-broker-7.2.0/bin/artemis consumer --destination helloworld --message-count 50 --url tcp://localhost:61616 Consumer:: filter = null Consumer ActiveMQQueue[helloworld], thread=0 wait until 50 messages are consumed Consumer ActiveMQQueue[helloworld], thread=0 Consumed: 50 messages Consumer ActiveMQQueue[helloworld], thread=0 Consumer thread finished In the web console, verify that the Message count is 0. All of the messages stored in the helloworld queue were consumed, and the queue is now empty. 4.4. 
Stopping the broker instance After creating the standalone broker and producing and consuming test messages, you can stop the broker instance. This procedure manually stops the broker, which forcefully closes all client connections. In a production environment, you should configure the broker to stop gracefully so that client connections can be closed properly. Procedure Use the artemis stop command to stop the broker instance: USD /var/opt/amq-broker/mybroker/bin/artemis stop 2018-12-03 14:37:30,630 INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.6.1.amq-720004-redhat-1 [b6c244ef-f1cb-11e8-a2d7-0800271b03bd] stopped, uptime 35 minutes Server stopped!
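If you prefer to check the queue from the command line rather than the web console, the artemis queue stat command prints per-queue message counts. This is a sketch: confirm the exact option names with ./artemis help queue stat for your broker version, and substitute your own credentials.

# Show message counts for the helloworld queue without opening the web console
/var/opt/amq-broker/mybroker/bin/artemis queue stat \
  --url tcp://localhost:61616 \
  --user admin --password <password> \
  --queueName helloworld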
[ "sudo mkdir /var/opt/amq-broker", "sudo chown -R amq-broker:amq-broker /var/opt/amq-broker", "su - amq-broker", "cd /var/opt/amq-broker", "<install-dir> /bin/artemis create mybroker", "> <install-dir> \\bin\\artemis.cmd create mybroker", "/opt/redhat/amq-broker/bin/artemis create mybroker Creating ActiveMQ Artemis instance at: /var/opt/amq-broker/mybroker --user: is mandatory with this configuration: Please provide the default username: admin --password: is mandatory with this configuration: Please provide the default password: --role: is mandatory with this configuration: Please provide the default role: amq --allow-anonymous | --require-login: is mandatory with this configuration: Allow anonymous access? (Y/N): Y Auto tuning journal done! Your system can make 19.23 writes per millisecond, your journal-buffer-timeout will be 52000 You can now start the broker by executing: \"/var/opt/amq-broker/mybroker/bin/artemis\" run Or you can run the broker in the background using: \"/var/opt/amq-broker/mybroker/bin/artemis-service\" start", "su - amq-broker", "/var/opt/amq-broker/mybroker/bin/artemis run __ __ ____ ____ _ /\\ | \\/ |/ __ \\ | _ \\ | | / \\ | \\ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\\ \\ | |\\/| | | | | | _ <| '__/ _ \\| |/ / _ \\ '__| / ____ \\| | | | |__| | | |_) | | | (_) | < __/ | /_/ \\_\\_| |_|\\___\\_\\ |____/|_| \\___/|_|\\_\\___|_| Red Hat JBoss AMQ 7.2.1.GA 10:53:43,959 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server 10:53:44,076 INFO [org.apache.activemq.artemis.core.server] AMQ221000: live Message Broker is starting with configuration Broker Configuration (clustered=false,journalDirectory=./data/journal,bindingsDirectory=./data/bindings,largeMessagesDirectory=./data/large-messages,pagingDirectory=./data/paging) 10:53:44,099 INFO [org.apache.activemq.artemis.core.server] AMQ221012: Using AIO Journal", "/opt/redhat/amq-broker/amq-broker-7.2.0/bin/artemis producer --destination helloworld --message-count 100 --url tcp://localhost:61616 Producer ActiveMQQueue[helloworld], thread=0 Started to calculate elapsed time Producer ActiveMQQueue[helloworld], thread=0 Produced: 100 messages Producer ActiveMQQueue[helloworld], thread=0 Elapsed time in second : 1 s Producer ActiveMQQueue[helloworld], thread=0 Elapsed time in milli second : 1289 milli seconds", "/opt/redhat/amq-broker/amq-broker-7.2.0/bin/artemis consumer --destination helloworld --message-count 50 --url tcp://localhost:61616 Consumer:: filter = null Consumer ActiveMQQueue[helloworld], thread=0 wait until 50 messages are consumed Consumer ActiveMQQueue[helloworld], thread=0 Consumed: 50 messages Consumer ActiveMQQueue[helloworld], thread=0 Consumer thread finished", "/var/opt/amq-broker/mybroker/bin/artemis run", "/opt/redhat/amq-broker/amq-broker-7.2.0/bin/artemis consumer --destination helloworld --message-count 50 --url tcp://localhost:61616 Consumer:: filter = null Consumer ActiveMQQueue[helloworld], thread=0 wait until 50 messages are consumed Consumer ActiveMQQueue[helloworld], thread=0 Consumed: 50 messages Consumer ActiveMQQueue[helloworld], thread=0 Consumer thread finished", "/var/opt/amq-broker/mybroker/bin/artemis stop 2018-12-03 14:37:30,630 INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.6.1.amq-720004-redhat-1 [b6c244ef-f1cb-11e8-a2d7-0800271b03bd] stopped, uptime 35 minutes Server stopped!" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/getting_started_with_amq_broker/creating-standalone-getting-started
Chapter 2. Secure applications and services with OpenID Connect
Chapter 2. Secure applications and services with OpenID Connect 2.1. Available Endpoints As a fully-compliant OpenID Connect Provider implementation, Red Hat build of Keycloak exposes a set of endpoints that applications and services can use to authenticate and authorize their users. This section describes some of the key endpoints that your application and service should use when interacting with Red Hat build of Keycloak. 2.1.1. Endpoints The most important endpoint to understand is the well-known configuration endpoint. It lists endpoints and other configuration options relevant to the OpenID Connect implementation in Red Hat build of Keycloak. The endpoint is: To obtain the full URL, add the base URL for Red Hat build of Keycloak and replace {realm-name} with the name of your realm. For example: http://localhost:8080/realms/master/.well-known/openid-configuration Some RP libraries retrieve all required endpoints from this endpoint, but for others you might need to list the endpoints individually. 2.1.1.1. Authorization endpoint The authorization endpoint performs authentication of the end-user. This authentication is done by redirecting the user agent to this endpoint. For more details see the Authorization Endpoint section in the OpenID Connect specification. 2.1.1.2. Token endpoint The token endpoint is used to obtain tokens. Tokens can either be obtained by exchanging an authorization code or by supplying credentials directly depending on what flow is used. The token endpoint is also used to obtain new access tokens when they expire. For more details, see the Token Endpoint section in the OpenID Connect specification. 2.1.1.3. Userinfo endpoint The userinfo endpoint returns standard claims about the authenticated user; this endpoint is protected by a bearer token. For more details, see the Userinfo Endpoint section in the OpenID Connect specification. 2.1.1.4. Logout endpoint The logout endpoint logs out the authenticated user. The user agent can be redirected to the endpoint, which causes the active user session to be logged out. The user agent is then redirected back to the application. The endpoint can also be invoked directly by the application. To invoke this endpoint directly, the refresh token needs to be included as well as the credentials required to authenticate the client. 2.1.1.5. Certificate endpoint The certificate endpoint returns the public keys enabled by the realm, encoded as a JSON Web Key (JWK). Depending on the realm settings, one or more keys can be enabled for verifying tokens. For more information, see the Server Administration Guide and the JSON Web Key specification . 2.1.1.6. Introspection endpoint The introspection endpoint is used to retrieve the active state of a token. In other words, you can use it to validate an access or refresh token. This endpoint can only be invoked by confidential clients. For more details on how to invoke on this endpoint, see OAuth 2.0 Token Introspection specification . 2.1.1.6.1. Introspection endpoint triggered with application/jwt header You can invoke an introspection endpoint with the HTTP header Accept: application/jwt instead of Accept: application/json . In case of application/jwt , the response may contain the additional claim jwt with the full JWT access token, which can be useful especially if the token to be introspected was a lightweight access token . This requires that you enable Support JWT claim in Introspection Response on the client advanced settings, which triggers the token introspection. 2.1.1.7. 
Dynamic Client Registration endpoint The dynamic client registration endpoint is used to dynamically register clients. For more details, see the Client Registration chapter and the OpenID Connect Dynamic Client Registration specification . 2.1.1.8. Token Revocation endpoint The token revocation endpoint is used to revoke tokens. Both refresh tokens and access tokens are supported by this endpoint. When revoking a refresh token, the user consent for the corresponding client is also revoked. For more details on how to invoke on this endpoint, see OAuth 2.0 Token Revocation specification . 2.1.1.9. Device Authorization endpoint The device authorization endpoint is used to obtain a device code and a user code. It can be invoked by confidential or public clients. For more details on how to invoke on this endpoint, see OAuth 2.0 Device Authorization Grant specification . 2.1.1.10. Backchannel Authentication endpoint The backchannel authentication endpoint is used to obtain an auth_req_id that identifies the authentication request made by the client. It can only be invoked by confidential clients. For more details on how to invoke on this endpoint, see OpenID Connect Client Initiated Backchannel Authentication Flow specification . Also refer to other places of Red Hat build of Keycloak documentation like Client Initiated Backchannel Authentication Grant section of this guide and Client Initiated Backchannel Authentication Grant section of Server Administration Guide. 2.2. Supported Grant Types This section describes the different grant types available to relying parties. 2.2.1. Authorization code The Authorization Code flow redirects the user agent to Red Hat build of Keycloak. Once the user has successfully authenticated with Red Hat build of Keycloak, an Authorization Code is created and the user agent is redirected back to the application. The application then uses the authorization code along with its credentials to obtain an Access Token, Refresh Token and ID Token from Red Hat build of Keycloak. The flow is targeted towards web applications, but is also recommended for native applications, including mobile applications, where it is possible to embed a user agent. For more details refer to the Authorization Code Flow in the OpenID Connect specification. 2.2.2. Implicit The Implicit flow works similarly to the Authorization Code flow, but instead of returning an Authorization Code, the Access Token and ID Token are returned. This approach reduces the need for the extra invocation to exchange the Authorization Code for an Access Token. However, it does not include a Refresh Token. This results in the need to permit Access Tokens with a long expiration; however, that approach is not practical because it is very hard to invalidate these tokens. Alternatively, you can require a new redirect to obtain a new Access Token once the initial Access Token has expired. The Implicit flow is useful if the application only wants to authenticate the user and deals with logout itself. You can instead use a Hybrid flow where both the Access Token and an Authorization Code are returned. One thing to note is that both the Implicit flow and Hybrid flow have potential security risks as the Access Token may be leaked through web server logs and browser history. You can somewhat mitigate this problem by using short expiration for Access Tokens. For more details, see the Implicit Flow in the OpenID Connect specification. 
Per current OAuth 2.0 Security Best Current Practice , this flow should not be used. This flow is removed from the future OAuth 2.1 specification . 2.2.3. Resource Owner Password Credentials Resource Owner Password Credentials, referred to as Direct Grant in Red Hat build of Keycloak, allows exchanging user credentials for tokens. Per current OAuth 2.0 Security Best Practices , this flow should not be used, preferring alternative methods such as Section 2.2.5, "Device Authorization Grant" or Section 2.2.1, "Authorization code" . The limitations of using this flow include: User credentials are exposed to the application Applications need login pages Application needs to be aware of the authentication scheme Changes to authentication flow requires changes to application No support for identity brokering or social login Flows are not supported (user self-registration, required actions, and so on.) Security concerns with this flow include: Involving more than Red Hat build of Keycloak in handling of credentials Increased vulnerable surface area where credential leaks can happen Creating an ecosystem where users trust another application for entering their credentials and not Red Hat build of Keycloak For a client to be permitted to use the Resource Owner Password Credentials grant, the client has to have the Direct Access Grants Enabled option enabled. This flow is not included in OpenID Connect, but is a part of the OAuth 2.0 specification. It is removed from the future OAuth 2.1 specification . For more details, see the Resource Owner Password Credentials Grant chapter in the OAuth 2.0 specification. 2.2.3.1. Example using CURL The following example shows how to obtain an access token for a user in the realm master with username user and password password . The example is using the confidential client myclient : curl \ -d "client_id=myclient" \ -d "client_secret=40cc097b-2a57-4c17-b36a-8fdf3fc2d578" \ -d "username=user" \ -d "password=password" \ -d "grant_type=password" \ "http://localhost:8080/realms/master/protocol/openid-connect/token" 2.2.4. Client credentials Client Credentials are used when clients (applications and services) want to obtain access on behalf of themselves rather than on behalf of a user. For example, these credentials can be useful for background services that apply changes to the system in general rather than for a specific user. Red Hat build of Keycloak provides support for clients to authenticate either with a secret or with public/private keys. This flow is not included in OpenID Connect, but is a part of the OAuth 2.0 specification. For more details, see the Client Credentials Grant chapter in the OAuth 2.0 specification. 2.2.5. Device Authorization Grant Device Authorization Grant is used by clients running on internet-connected devices that have limited input capabilities or lack a suitable browser. The application requests that Red Hat build of Keycloak provide a device code and a user code. Red Hat build of Keycloak creates a device code and a user code. Red Hat build of Keycloak returns a response including the device code and the user code to the application. The application provides the user with the user code and the verification URI. The user accesses a verification URI to be authenticated by using another browser. The application repeatedly polls Red Hat build of Keycloak until Red Hat build of Keycloak completes the user authorization. If user authentication is complete, the application obtains the device code. 
The application uses the device code along with its credentials to obtain an Access Token, Refresh Token and ID Token from Red Hat build of Keycloak. For more details, see the OAuth 2.0 Device Authorization Grant specification . 2.2.6. Client Initiated Backchannel Authentication Grant Client Initiated Backchannel Authentication Grant is used by clients who want to initiate the authentication flow by communicating with the OpenID Provider directly without redirect through the user's browser like OAuth 2.0's authorization code grant. The client requests from Red Hat build of Keycloak an auth_req_id that identifies the authentication request made by the client. Red Hat build of Keycloak creates the auth_req_id. After receiving this auth_req_id, this client repeatedly needs to poll Red Hat build of Keycloak to obtain an Access Token, Refresh Token, and ID Token from Red Hat build of Keycloak in return for the auth_req_id until the user is authenticated. In case that client uses ping mode, it does not need to repeatedly poll the token endpoint, but it can wait for the notification sent by Red Hat build of Keycloak to the specified Client Notification Endpoint. The Client Notification Endpoint can be configured in the Red Hat build of Keycloak Admin Console. The details of the contract for Client Notification Endpoint are described in the CIBA specification. For more details, see OpenID Connect Client Initiated Backchannel Authentication Flow specification . Also refer to other places of Red Hat build of Keycloak documentation such as Backchannel Authentication Endpoint of this guide and Client Initiated Backchannel Authentication Grant section of Server Administration Guide. For the details about FAPI CIBA compliance, see the FAPI section of this guide . 2.3. Red Hat build of Keycloak specific errors Red Hat build of Keycloak server can send errors to the client application in the OIDC authentication response with parameters error=temporarily_unavailable and error_description=authentication_expired . Red Hat build of Keycloak sends this error when a user is authenticated and has an SSO session, but the authentication session expired in the current browser tab and hence the Red Hat build of Keycloak server cannot automatically do SSO re-authentication of the user and redirect back to client with a successful response. When a client application receives this type of error, it is ideal to retry authentication immediately and send a new OIDC authentication request to the Red Hat build of Keycloak server, which should typically always authenticate the user due to the SSO session and redirect back. For more details, see the Server Administration Guide . 2.4. Financial-grade API (FAPI) Support Red Hat build of Keycloak makes it easier for administrators to make sure that their clients are compliant with these specifications: Financial-grade API Security Profile 1.0 - Part 1: Baseline Financial-grade API Security Profile 1.0 - Part 2: Advanced Financial-grade API: Client Initiated Backchannel Authentication Profile (FAPI CIBA) FAPI 2.0 Security Profile (Draft) FAPI 2.0 Message Signing (Draft) This compliance means that the Red Hat build of Keycloak server will verify the requirements for the authorization server, which are mentioned in the specifications. Red Hat build of Keycloak adapters do not have any specific support for the FAPI, hence the required validations on the client (application) side may need to be still done manually or through some other third-party solutions. 2.4.1. 
FAPI client profiles To make sure that your clients are FAPI compliant, you can configure Client Policies in your realm as described in the Server Administration Guide and link them to the global client profiles for FAPI support, which are automatically available in each realm. You can use either fapi-1-baseline or fapi-1-advanced profile based on which FAPI profile you need your clients to conform with. You can also use the profiles fapi-2-security-profile or fapi-2-message-signing for compliance with the FAPI 2 Draft specifications. In case you want to use Pushed Authorization Request (PAR) , it is recommended that your client use both the fapi-1-baseline profile and fapi-1-advanced for PAR requests. Specifically, the fapi-1-baseline profile contains the pkce-enforcer executor, which makes sure that the client uses PKCE with the secure S256 algorithm. This is not required for FAPI Advanced clients unless they use PAR requests. In case you want to use CIBA in a FAPI compliant way, make sure that your clients use both fapi-1-advanced and fapi-ciba client profiles. There is a need to use the fapi-1-advanced profile, or other client profile containing the requested executors, as the fapi-ciba profile contains just CIBA-specific executors. When enforcing the requirements of the FAPI CIBA specification, there is a need for more requirements, such as enforcement of confidential clients or certificate-bound access tokens. 2.4.2. Open Finance Brasil Financial-grade API Security Profile Red Hat build of Keycloak is compliant with the Open Finance Brasil Financial-grade API Security Profile 1.0 Implementers Draft 3 . This profile is stricter in some requirements than the FAPI 1 Advanced specification, and hence you may need to configure Client Policies more strictly to enforce some of the requirements. Especially: If your client does not use PAR, make sure that it uses encrypted OIDC request objects. This can be achieved by using a client profile with the secure-request-object executor configured with Encryption Required enabled. Make sure that for JWS, the client uses the PS256 algorithm. For JWE, the client should use the RSA-OAEP with A256GCM . This may need to be set in all the Client Settings where these algorithms are applicable. 2.4.3. Australia Consumer Data Right (CDR) Security Profile Red Hat build of Keycloak is compliant with the Australia Consumer Data Right Security Profile . If you want to apply the Australia CDR security profile, you need to use the fapi-1-advanced profile because the Australia CDR security profile is based on the FAPI 1.0 Advanced security profile. If your client also applies PAR, make sure that the client applies RFC 7636 Proof Key for Code Exchange (PKCE) because the Australia CDR security profile requires that you apply PKCE when applying PAR. This can be achieved by using a client profile with the pkce-enforcer executor. 2.4.4. TLS considerations As confidential information is being exchanged, all interactions shall be encrypted with TLS (HTTPS). Moreover, there are some requirements in the FAPI specification for the cipher suites and TLS protocol versions used. To match these requirements, you can consider configuring the allowed ciphers. This configuration can be done by setting the https-protocols and https-cipher-suites options. Red Hat build of Keycloak uses TLSv1.3 by default and hence it is possibly not needed to change the default settings. However, you may need to adjust the ciphers if you have to fall back to a lower TLS version for some reason. 
For more details, see Configuring TLS chapter. 2.5. OAuth 2.1 Support Red Hat build of Keycloak makes it easier for administrators to make sure that their clients are compliant with these specifications: The OAuth 2.1 Authorization Framework - draft specification This compliance means that the Red Hat build of Keycloak server will verify the requirements for the authorization server, which are mentioned in the specifications. Red Hat build of Keycloak adapters do not have any specific support for the OAuth 2.1, hence the required validations on the client (application) side may need to be still done manually or through some other third-party solutions. 2.5.1. OAuth 2.1 client profiles To make sure that your clients are OAuth 2.1 compliant, you can configure Client Policies in your realm as described in the Server Administration Guide and link them to the global client profiles for OAuth 2.1 support, which are automatically available in each realm. You can use either oauth-2-1-for-confidential-client profile for confidential clients or oauth-2-1-for-public-client profile for public clients. Note OAuth 2.1 specification is still a draft and it may change in the future. Hence the Red Hat build of Keycloak built-in OAuth 2.1 client profiles can change as well. Note When using OAuth 2.1 profile for public clients, it is recommended to use DPoP preview feature as described in the Server Administration Guide because DPoP binds an access token and a refresh token together with the public part of a client's key pair. This binding prevents an attacker from using stolen tokens. 2.6. Recommendations This section describes some recommendations when securing your applications with Red Hat build of Keycloak. 2.6.1. Validating access tokens If you need to manually validate access tokens issued by Red Hat build of Keycloak, you can invoke the Introspection Endpoint . The downside to this approach is that you have to make a network invocation to the Red Hat build of Keycloak server. This can be slow and possibly overload the server if you have too many validation requests going on at the same time. Red Hat build of Keycloak issued access tokens are JSON Web Tokens (JWT) digitally signed and encoded using JSON Web Signature (JWS) . Because they are encoded in this way, you can locally validate access tokens using the public key of the issuing realm. You can either hard code the realm's public key in your validation code, or lookup and cache the public key using the certificate endpoint with the Key ID (KID) embedded within the JWS. Depending on what language you code in, many third party libraries exist and they can help you with JWS validation. 2.6.2. Redirect URIs When using the redirect based flows, be sure to use valid redirect uris for your clients. The redirect uris should be as specific as possible. This especially applies to client-side (public clients) applications. Failing to do so could result in: Open redirects - this can allow attackers to create spoof links that looks like they are coming from your domain Unauthorized entry - when users are already authenticated with Red Hat build of Keycloak, an attacker can use a public client where redirect uris have not be configured correctly to gain access by redirecting the user without the users knowledge In production for web applications always use https for all redirect URIs. Do not allow redirects to http. 
A few special redirect URIs also exist: http://127.0.0.1 This redirect URI is useful for native applications and allows the native application to create a web server on a random port that can be used to obtain the authorization code. This redirect uri allows any port. Note that per OAuth 2.0 for Native Apps , the use of localhost is not recommended and the IP literal 127.0.0.1 should be used instead. urn:ietf:wg:oauth:2.0:oob If you cannot start a web server in the client (or a browser is not available), you can use the special urn:ietf:wg:oauth:2.0:oob redirect uri. When this redirect uri is used, Red Hat build of Keycloak displays a page with the code in the title and in a box on the page. The application can either detect that the browser title has changed, or the user can copy and paste the code manually to the application. With this redirect uri, a user can use a different device to obtain a code to paste back to the application.
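To complement the Direct Grant example above, the following sketch shows a Client Credentials request and an introspection call against the token and introspection endpoints listed below, reusing the same example confidential client myclient; replace the secret and the <access-token> placeholder with your own values.

# Obtain a token on behalf of the client itself (Client Credentials grant)
curl -d "client_id=myclient" \
     -d "client_secret=40cc097b-2a57-4c17-b36a-8fdf3fc2d578" \
     -d "grant_type=client_credentials" \
     "http://localhost:8080/realms/master/protocol/openid-connect/token"

# Check whether a previously issued token is still active via the introspection endpoint
curl -u myclient:40cc097b-2a57-4c17-b36a-8fdf3fc2d578 \
     -d "token=<access-token>" \
     "http://localhost:8080/realms/master/protocol/openid-connect/token/introspect"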
[ "/realms/{realm-name}/.well-known/openid-configuration", "/realms/{realm-name}/protocol/openid-connect/auth", "/realms/{realm-name}/protocol/openid-connect/token", "/realms/{realm-name}/protocol/openid-connect/userinfo", "/realms/{realm-name}/protocol/openid-connect/logout", "/realms/{realm-name}/protocol/openid-connect/certs", "/realms/{realm-name}/protocol/openid-connect/token/introspect", "/realms/{realm-name}/clients-registrations/openid-connect", "/realms/{realm-name}/protocol/openid-connect/revoke", "/realms/{realm-name}/protocol/openid-connect/auth/device", "/realms/{realm-name}/protocol/openid-connect/ext/ciba/auth", "curl -d \"client_id=myclient\" -d \"client_secret=40cc097b-2a57-4c17-b36a-8fdf3fc2d578\" -d \"username=user\" -d \"password=password\" -d \"grant_type=password\" \"http://localhost:8080/realms/master/protocol/openid-connect/token\"" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/securing_applications_and_services_guide/oidc-layers-
Chapter 3. Prerequisites for the Migration Toolkit for Applications installation
Chapter 3. Prerequisites for the Migration Toolkit for Applications installation The following are the prerequisites for the Migration Toolkit for Applications (MTA) installation: Java Development Kit (JDK) is installed. MTA supports the following JDKs: OpenJDK 11 OpenJDK 17 Oracle JDK 11 Oracle JDK 17 Eclipse Temurin™ JDK 11 Eclipse Temurin™ JDK 17 8 GB RAM macOS installation: the value of maxproc must be 2048 or greater.
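The following quick checks, offered as a sketch rather than an official procedure, verify the prerequisites above on the command line; on macOS the launchctl limit change may not persist across reboots, so consult your local system policy.

# Confirm that a supported JDK (11 or 17) is on the PATH
java -version

# On macOS, inspect the current maxproc limit and raise it to at least 2048 if required
launchctl limit maxproc
sudo launchctl limit maxproc 2048 2048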
null
https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.1/html/introduction_to_the_migration_toolkit_for_applications/prerequisites_for_the_migration_toolkit_for_applications_installation
Chapter 8. Using ID Views in Active Directory Environments
Chapter 8. Using ID Views in Active Directory Environments ID views enable you to specify new values for POSIX user or group attributes, as well as to define on which client host or hosts the new values will apply. Integration systems other than Identity Management (IdM) sometimes generate UID and GID values based on an algorithm different than the algorithm used in IdM. By overriding the previously generated values to make them compliant with the values used in IdM, a client that used to be a member of another integration system can be fully integrated with IdM. Note This chapter only describes ID views functionality related to Active Directory (AD). For general information about ID views, see the Linux Domain Identity, Authentication, and Policy Guide . You can use ID views in AD environments for the following purposes: Overriding AD User Attributes, such as POSIX Attributes or SSH Login Details See Section 8.3, "Using ID Views to Define AD User Attributes" for details. Migrating from synchronization-based to trust-based integration See Section 7.2, "Migrate from Synchronization to Trust Manually Using ID Views" for details. Performing per-host group override of the IdM user attributes See Section 8.4, "Migrating NIS Domains to IdM" for details. 8.1. Active Directory Default Trust View 8.1.1. What Is the Default Trust View The Default Trust View is the default ID view always applied to AD users and groups in trust-based setups. It is created automatically when you establish the trust using ipa-adtrust-install and cannot be deleted. Using the Default Trust View, you can define custom POSIX attributes for AD users and groups, thus overriding the values defined in AD. Table 8.1. Applying the Default Trust View Values in AD Default Trust View Result Login ad_user ad_user ad_user UID 111 222 222 GID 111 (no value) 111 Note The Default Trust View only accepts overrides for AD users and groups, not for IdM users and groups. It is applied on the IdM server and clients and therefore only need to provide overrides for Active Directory users and groups. 8.1.2. Overriding the Default Trust View with Other ID Views If another ID view applied to the host overrides the attribute values in the Default Trust View, IdM applies the values from the host-specific ID view on top of the Default Trust View. If an attribute is defined in the host-specific ID view, IdM applies the value from this view. If an attribute is not defined in the host-specific ID view, IdM applies the value from the Default Trust View. The Default Trust View is always applied to IdM servers and replicas as well as to AD users and groups. You cannot assign a different ID view to them: they always apply the values from the Default Trust View. Table 8.2. Applying a Host-Specific ID View on Top of the Default Trust View Values in AD Default Trust View Host-Specific View Result Login ad_user ad_user (no value) ad_user UID 111 222 333 333 GID 111 (no value) 333 333 8.1.3. ID Overrides on Clients Based on the Client Version The IdM masters always apply ID overrides from the Default Trust View, regardless of how IdM clients retrieve the values: using SSSD or using Schema Compatibility tree requests. However, the availability of ID overrides from host-specific ID views is limited: Legacy clients: RHEL 6.3 and earlier (SSSD 1.8 and earlier) The clients can request a specific ID view to be applied. To use a host-specific ID view on a legacy client, change the base DN on the client to: cn= id_view_name ,cn=views,cn=compat,dc= example ,dc= com . 
RHEL 6.4 to 7.0 (SSSD 1.9 to 1.11) Host-specific ID views on the clients are not supported. RHEL 7.1 and later (SSSD 1.12 and later) Full support.
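The overrides themselves are managed with the ipa command-line tools on an IdM server. The following sketch is illustrative only: it assumes an established trust with an AD domain named ad.example.com, it reuses the ad_user login and the UID values from the tables above, and the exact option names can vary between IdM versions.

# Override the POSIX UID of an AD user in the Default Trust View
ipa idoverrideuser-add 'Default Trust View' ad_user@ad.example.com --uid=222

# For a host-specific view, create the view, add the override, and apply it to selected clients
ipa idview-add example_view --desc='Overrides for selected clients'
ipa idoverrideuser-add example_view ad_user@ad.example.com --uid=333
ipa idview-apply example_view --hosts=client.example.com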
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/windows_integration_guide/id-views
Part V. Designing a decision service using spreadsheet decision tables
Part V. Designing a decision service using spreadsheet decision tables As a business analyst or business rules developer, you can define business rules in a tabular format in spreadsheet decision tables and then upload the spreadsheets to your project in Business Central. These rules are compiled into Drools Rule Language (DRL) and form the core of the decision service for your project. Note You can also design your decision service using Decision Model and Notation (DMN) models instead of rule-based or table-based assets. For information about DMN support in Red Hat Process Automation Manager 7.13, see the following resources: Getting started with decision services (step-by-step tutorial with a DMN decision service example) Designing a decision service using DMN models (overview of DMN support and capabilities in Red Hat Process Automation Manager) Prerequisites The space and project for the decision tables have been created in Business Central. Each asset is associated with a project assigned to a space. For details, see Getting started with decision services .
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/assembly-decision-tables
Object Gateway for Production Guide
Object Gateway for Production Guide Red Hat Ceph Storage 4 Planning, designing and deploying Ceph Storage clusters and Ceph Object Gateway clusters for production. Red Hat Ceph Storage Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/object_gateway_for_production_guide/index
Chapter 1. Overview of accelerators
Chapter 1. Overview of accelerators If you work with large data sets, you can use accelerators to optimize the performance of your data science models in OpenShift AI. With accelerators, you can scale your work, reduce latency, and increase productivity. You can use accelerators in OpenShift AI to assist your data scientists in the following tasks: Natural language processing (NLP) Inference Training deep neural networks Data cleansing and data processing OpenShift AI supports the following accelerators: NVIDIA graphics processing units (GPUs) To use compute-heavy workloads in your models, you can enable NVIDIA graphics processing units (GPUs) in OpenShift AI. To enable NVIDIA GPUs on OpenShift, you must install the NVIDIA GPU Operator . AMD graphics processing units (GPUs) Use the AMD GPU Operator to enable AMD GPUs for workloads such as AI/ML training and inference. To enable AMD GPUs on OpenShift, you must do the following tasks: Install the AMD GPU Operator. Follow the instructions for full deployment and driver configuration in the AMD GPU Operator documentation . Once installed, the AMD GPU Operator allows you to use the ROCm workbench images to streamline AI/ML workflows on AMD GPUs. Intel Gaudi AI accelerators Intel provides hardware accelerators intended for deep learning workloads. Before you can enable Intel Gaudi AI accelerators in OpenShift AI, you must install the necessary dependencies. Also, the version of the Intel Gaudi AI Operator that you install must match the version of the corresponding workbench image in your deployment. A workbench image for Intel Gaudi accelerators is not included in OpenShift AI by default. Instead, you must create and configure a custom notebook to enable Intel Gaudi AI support. You can enable Intel Gaudi AI accelerators on-premises or with AWS DL1 compute nodes on an AWS instance. Before you can use an accelerator in OpenShift AI, you must enable GPU support in OpenShift AI. This includes installing the Node Feature Discovery operator and NVIDIA GPU Operators. For more information, see Installing the Node Feature Discovery operator and Enabling NVIDIA GPUs . In addition, your OpenShift instance must contain an associated accelerator profile. For accelerators that are new to your deployment, you must configure an accelerator profile for the accelerator in context. You can create an accelerator profile from the Settings Accelerator profiles page on the OpenShift AI dashboard. If your deployment contains existing accelerators that had associated accelerator profiles already configured, an accelerator profile is automatically created after you upgrade to the latest version of OpenShift AI. Additional resources Habana, an Intel Company Amazon EC2 DL1 Instances AMD ROCm documentation AMD GPU Operator on GitHub lspci(8) - Linux man page
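After the relevant operator is installed, a quick way to confirm that the accelerators are exposed to the cluster is to check the extended resources advertised by the nodes. The command below is a generic sketch using the OpenShift CLI; nvidia.com/gpu is the resource name published by the NVIDIA GPU Operator, amd.com/gpu is the name commonly used for AMD GPUs, and the output depends entirely on your cluster.

# List the accelerator resources that the nodes advertise
oc describe nodes | grep -E 'nvidia.com/gpu|amd.com/gpu'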
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/working_with_accelerators/overview-of-accelerators_accelerators
Chapter 6. Updating RHEL 9 content
Chapter 6. Updating RHEL 9 content With DNF , you can check if your system has any pending updates. You can list packages that need updating and choose to update a single package, multiple packages, or all packages at once. If any of the packages you choose to update have dependencies, these dependencies are updated as well. 6.1. Checking for updates To identify which packages installed on your system have available updates, you can list them. Procedure Check the available updates for installed packages: The output returns the list of packages and their dependencies that have an update available. 6.2. Updating packages You can use DNF to update a single package, a package group, or all packages and their dependencies at once. Important When applying updates to the kernel, dnf always installs a new kernel regardless of whether you are using the dnf upgrade or dnf install command. Note that this only applies to packages identified by using the installonlypkgs DNF configuration option. Such packages include, for example, the kernel , kernel-core , and kernel-modules packages. Procedure Depending on your scenario, use one of the following options to apply updates: To update all packages and their dependencies, enter: To update a single package, enter: To update packages only from a specific package group, enter: Important If you upgraded the GRUB boot loader packages on a BIOS or IBM Power system, reinstall GRUB. See Reinstalling GRUB . 6.3. Updating security-related packages You can use DNF to update security-related packages. Procedure Depending on your scenario, use one of the following options to apply updates: To upgrade to the latest available packages that have security errata, enter: To upgrade only to the minimal package versions that fix the security issues, enter: Important If you upgraded the GRUB boot loader packages on a BIOS or IBM Power system, reinstall GRUB. See Reinstalling GRUB . Additional resources Managing and monitoring security updates
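The following sketch ties the procedures above together; the package name is only an example, and dnf history is a standard DNF subcommand (not covered above) that you can use to review recent transactions.

# Check whether a specific package has a pending update (bash is an illustrative name)
dnf check-update bash

# Apply all updates that have security errata, then list the recent transactions
dnf upgrade --security
dnf history list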
[ "dnf check-update", "dnf upgrade", "dnf upgrade <package_name>", "dnf group upgrade <group_name>", "dnf upgrade --security", "dnf upgrade-minimal --security" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_software_with_the_dnf_tool/assembly_updating-rhel-9-content_managing-software-with-the-dnf-tool
3.6. Tapsets
3.6. Tapsets Tapsets are scripts that form a library of pre-written probes and functions to be used in SystemTap scripts. When a user runs a SystemTap script, SystemTap checks the script's probe events and handlers against the tapset library; SystemTap then loads the corresponding probes and functions before translating the script to C (see Section 3.1, "Architecture" for information on what transpires in a SystemTap session). Like SystemTap scripts, tapsets use the file name extension .stp . The standard library of tapsets is located in the /usr/share/systemtap/tapset/ directory by default. However, unlike SystemTap scripts, tapsets are not meant for direct execution; rather, they constitute the library from which other scripts can pull definitions. The tapset library is an abstraction layer designed to make it easier for users to define events and functions. Tapsets provide useful aliases for functions that users may want to specify as an event; knowing the proper alias to use is, for the most part, easier than remembering specific kernel functions that might vary between kernel versions. Several handlers and functions in Section 3.2.1, "Event" and SystemTap Functions are defined in tapsets. For example, thread_indent() is defined in indent.stp .
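To see a tapset-provided alias and its helper functions in action, you can run a one-line script directly with stap. The following sketch assumes the systemtap packages and the matching kernel debuginfo are installed; it uses the syscall.open alias together with the execname() and pid() helper functions instead of probing a raw kernel function by name, and the filename context variable is supplied by the syscall tapset.

stap -e 'probe syscall.open { printf("%s(%d) opened %s\n", execname(), pid(), filename) }'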
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_beginners_guide/understanding-tapsets
Chapter 114. Spring JDBC
Chapter 114. Spring JDBC Since Camel 3.10 Only producer is supported The Spring JDBC component is an extension of the JDBC component with one additional feature to integrate with Spring Transaction Manager. 114.1. Dependencies When using spring-jdbc with Red Hat build of Camel Spring Boot use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-spring-jdbc-starter</artifactId> </dependency> The version is specified using BOM in the following way. <dependencyManagement> <dependencies> <dependency> <groupId>com.redhat.camel.springboot.platform</groupId> <artifactId>camel-spring-boot-bom</artifactId> <version>USD{camel-spring-boot-version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> 114.2. Configuring Options Camel components are configured on two levels: Component level Endpoint level 114.2.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 114.2.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 114.3. Component Options The Spring JDBC component supports 4 options that are listed below. Name Description Default Type dataSource (producer) To use the DataSource instance instead of looking up the data source by name from the registry. DataSource lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true boolean connectionStrategy (advanced) To use a custom strategy for working with connections. Do not use a custom strategy when using the spring-jdbc component because a special Spring ConnectionStrategy is used by default to support Spring Transactions. ConnectionStrategy 114.4. Endpoint Options The Spring JDBC endpoint is configured using URI syntax: Following are the path and query parameters: 114.4.1. Path Parameters (1 parameters) Name Description Default Type dataSourceName (producer) Required Name of DataSource to lookup in the Registry. If the name is dataSource or default, then Camel will attempt to lookup a default DataSource from the registry, meaning if there is a only one instance of DataSource found, then this DataSource will be used. String 114.4.2. Query Parameters (14 parameters) Name Description Default Type allowNamedParameters (producer) Whether to allow using named parameters in the queries. true boolean outputClass (producer) Specify the full package and class name to use as conversion when outputType=SelectOne or SelectList. String outputType (producer) Determines the output the producer should use. Enum values: SelectOne SelectList StreamList SelectList JdbcOutputType parameters (producer) Optional parameters to the java.sql.Statement. For example to set maxRows, fetchSize etc. Map readSize (producer) The default maximum number of rows that can be read by a polling query. The default value is 0. int resetAutoCommit (producer) Camel will set the autoCommit on the JDBC connection to be false, commit the change after executed the statement and reset the autoCommit flag of the connection at the end, if the resetAutoCommit is true. If the JDBC connection doesn't support to reset the autoCommit flag, you can set the resetAutoCommit flag to be false, and Camel will not try to reset the autoCommit flag. When used with XA transactions you most likely need to set it to false so that the transaction manager is in charge of committing this tx. true boolean transacted (producer) Whether transactions are in use. false boolean useGetBytesForBlob (producer) To read BLOB columns as bytes instead of string data. This may be needed for certain databases such as Oracle where you must read BLOB columns as bytes. false boolean useHeadersAsParameters (producer) Set this option to true to use the prepareStatementStrategy with named parameters. This allows to define queries with named placeholders, and use headers with the dynamic values for the query placeholders. false boolean useJDBC4ColumnNameAndLabelSemantics (producer) Sets whether to use JDBC 4 or JDBC 3.0 or older semantic when retrieving column name. JDBC 4.0 uses columnLabel to get the column name where as JDBC 3.0 uses both columnName or columnLabel. Unfortunately JDBC drivers behave differently so you can use this option to work out issues around your JDBC driver if you get problem using this component This option is default true. true boolean lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean beanRowMapper (advanced) To use a custom org.apache.camel.component.jdbc.BeanRowMapper when using outputClass. The default implementation will lower case the row names and skip underscores, and dashes. For example CUST_ID is mapped as custId. BeanRowMapper connectionStrategy (advanced) To use a custom strategy for working with connections. Do not use a custom strategy when using the spring-jdbc component because a special Spring ConnectionStrategy is used by default to support Spring Transactions. ConnectionStrategy prepareStatementStrategy (advanced) Allows the plugin to use a custom org.apache.camel.component.jdbc.JdbcPrepareStatementStrategy to control preparation of the query and prepared statement. JdbcPrepareStatementStrategy 114.5. Spring Boot Auto-Configuration The component supports 4 options that are listed below. Name Description Default Type camel.component.spring-jdbc.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.spring-jdbc.connection-strategy To use a custom strategy for working with connections. Do not use a custom strategy when using the spring-jdbc component because a special Spring ConnectionStrategy is used by default to support Spring Transactions. The option is a org.apache.camel.component.jdbc.ConnectionStrategy type. ConnectionStrategy camel.component.spring-jdbc.enabled Whether to enable auto configuration of the spring-jdbc component. This is enabled by default. Boolean camel.component.spring-jdbc.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
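Because the component is auto-configured by Camel Spring Boot, the options listed above do not have to be set in code. The command below is only a sketch of the standard Spring Boot property-override mechanism: the JAR name is illustrative, and the property is one of the camel.component.spring-jdbc.* keys from the auto-configuration table.

java -jar target/my-camel-app.jar --camel.component.spring-jdbc.lazy-start-producer=true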
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-spring-jdbc-starter</artifactId> </dependency>", "<dependencyManagement> <dependencies> <dependency> <groupId>com.redhat.camel.springboot.platform</groupId> <artifactId>camel-spring-boot-bom</artifactId> <version>USD{camel-spring-boot-version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement>", "spring-jdbc:dataSourceName" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-spring-jdbc-component-starter
Chapter 6. Uninstalling a cluster on Nutanix
Chapter 6. Uninstalling a cluster on Nutanix You can remove a cluster that you deployed to Nutanix. 6.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program.
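For example, if the installation files were stored in a directory named nutanix-cluster (an illustrative path), the destroy command and the optional cleanup step could look like the following:

./openshift-install destroy cluster --dir nutanix-cluster --log-level debug
# Optional: remove the installation directory and the installer afterwards
rm -rf nutanix-cluster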
[ "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_nutanix/uninstalling-cluster-nutanix
Chapter 45. PolicyService
Chapter 45. PolicyService 45.1. CancelDryRunJob DELETE /v1/policies/dryrunjob/{jobId} 45.1.1. Description 45.1.2. Parameters 45.1.2.1. Path Parameters Name Description Required Default Pattern jobId X null 45.1.3. Return Type Object 45.1.4. Content Type application/json 45.1.5. Responses Table 45.1. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. RuntimeError 45.1.6. Samples 45.1.7. Common object reference 45.1.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 45.1.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 45.1.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 45.2. QueryDryRunJobStatus GET /v1/policies/dryrunjob/{jobId} 45.2.1. Description 45.2.2. Parameters 45.2.2.1. Path Parameters Name Description Required Default Pattern jobId X null 45.2.3. Return Type V1DryRunJobStatusResponse 45.2.4. 
Content Type application/json 45.2.5. Responses Table 45.2. HTTP Response Codes Code Message Datatype 200 A successful response. V1DryRunJobStatusResponse 0 An unexpected error response. RuntimeError 45.2.6. Samples 45.2.7. Common object reference 45.2.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 45.2.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 45.2.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 45.2.7.3. V1DryRunJobStatusResponse Field Name Required Nullable Type Description Format pending Boolean result V1DryRunResponse 45.2.7.4. V1DryRunResponse Field Name Required Nullable Type Description Format alerts List of V1DryRunResponseAlert 45.2.7.5. V1DryRunResponseAlert Field Name Required Nullable Type Description Format deployment String violations List of string 45.3. SubmitDryRunPolicyJob POST /v1/policies/dryrunjob 45.3.1. Description 45.3.2. 
Parameters 45.3.2.1. Body Parameter Name Description Required Default Pattern body StoragePolicy X 45.3.3. Return Type V1JobId 45.3.4. Content Type application/json 45.3.5. Responses Table 45.3. HTTP Response Codes Code Message Datatype 200 A successful response. V1JobId 0 An unexpected error response. RuntimeError 45.3.6. Samples 45.3.7. Common object reference 45.3.7.1. PolicyMitreAttackVectors Field Name Required Nullable Type Description Format tactic String techniques List of string 45.3.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 45.3.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 45.3.7.3. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 45.3.7.4. StorageBooleanOperator Enum Values OR AND 45.3.7.5. 
StorageEnforcementAction FAIL_KUBE_REQUEST_ENFORCEMENT: FAIL_KUBE_REQUEST_ENFORCEMENT takes effect only if admission control webhook is enabled to listen on exec and port-forward events. FAIL_DEPLOYMENT_CREATE_ENFORCEMENT: FAIL_DEPLOYMENT_CREATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object creates. FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT: FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object updates. Enum Values UNSET_ENFORCEMENT SCALE_TO_ZERO_ENFORCEMENT UNSATISFIABLE_NODE_CONSTRAINT_ENFORCEMENT KILL_POD_ENFORCEMENT FAIL_BUILD_ENFORCEMENT FAIL_KUBE_REQUEST_ENFORCEMENT FAIL_DEPLOYMENT_CREATE_ENFORCEMENT FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT 45.3.7.6. StorageEventSource Enum Values NOT_APPLICABLE DEPLOYMENT_EVENT AUDIT_LOG_EVENT 45.3.7.7. StorageExclusion Field Name Required Nullable Type Description Format name String deployment StorageExclusionDeployment image StorageExclusionImage expiration Date date-time 45.3.7.8. StorageExclusionDeployment Field Name Required Nullable Type Description Format name String scope StorageScope 45.3.7.9. StorageExclusionImage Field Name Required Nullable Type Description Format name String 45.3.7.10. StorageLifecycleStage Enum Values DEPLOY BUILD RUNTIME 45.3.7.11. StoragePolicy Field Name Required Nullable Type Description Format id String name String description String rationale String remediation String disabled Boolean categories List of string lifecycleStages List of StorageLifecycleStage eventSource StorageEventSource NOT_APPLICABLE, DEPLOYMENT_EVENT, AUDIT_LOG_EVENT, exclusions List of StorageExclusion scope List of StorageScope severity StorageSeverity UNSET_SEVERITY, LOW_SEVERITY, MEDIUM_SEVERITY, HIGH_SEVERITY, CRITICAL_SEVERITY, enforcementActions List of StorageEnforcementAction FAIL_DEPLOYMENT_CREATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object creates/updates. FAIL_KUBE_REQUEST_ENFORCEMENT takes effect only if admission control webhook is enabled to listen on exec and port-forward events. FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object updates. notifiers List of string lastUpdated Date date-time SORTName String For internal use only. SORTLifecycleStage String For internal use only. SORTEnforcement Boolean For internal use only. policyVersion String policySections List of StoragePolicySection mitreAttackVectors List of PolicyMitreAttackVectors criteriaLocked Boolean Read-only field. If true, the policy's criteria fields are rendered read-only. mitreVectorsLocked Boolean Read-only field. If true, the policy's MITRE ATT&CK fields are rendered read-only. isDefault Boolean Read-only field. Indicates the policy is a default policy if true and a custom policy if false. 45.3.7.12. StoragePolicyGroup Field Name Required Nullable Type Description Format fieldName String booleanOperator StorageBooleanOperator OR, AND, negate Boolean values List of StoragePolicyValue 45.3.7.13. StoragePolicySection Field Name Required Nullable Type Description Format sectionName String policyGroups List of StoragePolicyGroup 45.3.7.14. StoragePolicyValue Field Name Required Nullable Type Description Format value String 45.3.7.15. StorageScope Field Name Required Nullable Type Description Format cluster String namespace String label StorageScopeLabel 45.3.7.16. 
StorageScopeLabel Field Name Required Nullable Type Description Format key String value String 45.3.7.17. StorageSeverity Enum Values UNSET_SEVERITY LOW_SEVERITY MEDIUM_SEVERITY HIGH_SEVERITY CRITICAL_SEVERITY 45.3.7.18. V1JobId Field Name Required Nullable Type Description Format jobId String 45.4. DryRunPolicy POST /v1/policies/dryrun DryRunPolicy evaluates the given policy and returns any alerts without creating the policy. 45.4.1. Description 45.4.2. Parameters 45.4.2.1. Body Parameter Name Description Required Default Pattern body StoragePolicy X 45.4.3. Return Type V1DryRunResponse 45.4.4. Content Type application/json 45.4.5. Responses Table 45.4. HTTP Response Codes Code Message Datatype 200 A successful response. V1DryRunResponse 0 An unexpected error response. RuntimeError 45.4.6. Samples 45.4.7. Common object reference 45.4.7.1. PolicyMitreAttackVectors Field Name Required Nullable Type Description Format tactic String techniques List of string 45.4.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 45.4.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. 
Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 45.4.7.3. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 45.4.7.4. StorageBooleanOperator Enum Values OR AND 45.4.7.5. StorageEnforcementAction FAIL_KUBE_REQUEST_ENFORCEMENT: FAIL_KUBE_REQUEST_ENFORCEMENT takes effect only if admission control webhook is enabled to listen on exec and port-forward events. FAIL_DEPLOYMENT_CREATE_ENFORCEMENT: FAIL_DEPLOYMENT_CREATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object creates. FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT: FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object updates. Enum Values UNSET_ENFORCEMENT SCALE_TO_ZERO_ENFORCEMENT UNSATISFIABLE_NODE_CONSTRAINT_ENFORCEMENT KILL_POD_ENFORCEMENT FAIL_BUILD_ENFORCEMENT FAIL_KUBE_REQUEST_ENFORCEMENT FAIL_DEPLOYMENT_CREATE_ENFORCEMENT FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT 45.4.7.6. StorageEventSource Enum Values NOT_APPLICABLE DEPLOYMENT_EVENT AUDIT_LOG_EVENT 45.4.7.7. StorageExclusion Field Name Required Nullable Type Description Format name String deployment StorageExclusionDeployment image StorageExclusionImage expiration Date date-time 45.4.7.8. StorageExclusionDeployment Field Name Required Nullable Type Description Format name String scope StorageScope 45.4.7.9. StorageExclusionImage Field Name Required Nullable Type Description Format name String 45.4.7.10. StorageLifecycleStage Enum Values DEPLOY BUILD RUNTIME 45.4.7.11. StoragePolicy Field Name Required Nullable Type Description Format id String name String description String rationale String remediation String disabled Boolean categories List of string lifecycleStages List of StorageLifecycleStage eventSource StorageEventSource NOT_APPLICABLE, DEPLOYMENT_EVENT, AUDIT_LOG_EVENT, exclusions List of StorageExclusion scope List of StorageScope severity StorageSeverity UNSET_SEVERITY, LOW_SEVERITY, MEDIUM_SEVERITY, HIGH_SEVERITY, CRITICAL_SEVERITY, enforcementActions List of StorageEnforcementAction FAIL_DEPLOYMENT_CREATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object creates/updates. FAIL_KUBE_REQUEST_ENFORCEMENT takes effect only if admission control webhook is enabled to listen on exec and port-forward events. FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object updates. notifiers List of string lastUpdated Date date-time SORTName String For internal use only. SORTLifecycleStage String For internal use only. SORTEnforcement Boolean For internal use only. policyVersion String policySections List of StoragePolicySection mitreAttackVectors List of PolicyMitreAttackVectors criteriaLocked Boolean Read-only field. If true, the policy's criteria fields are rendered read-only. mitreVectorsLocked Boolean Read-only field. If true, the policy's MITRE ATT&CK fields are rendered read-only. isDefault Boolean Read-only field. Indicates the policy is a default policy if true and a custom policy if false. 45.4.7.12. StoragePolicyGroup Field Name Required Nullable Type Description Format fieldName String booleanOperator StorageBooleanOperator OR, AND, negate Boolean values List of StoragePolicyValue 45.4.7.13. 
StoragePolicySection Field Name Required Nullable Type Description Format sectionName String policyGroups List of StoragePolicyGroup 45.4.7.14. StoragePolicyValue Field Name Required Nullable Type Description Format value String 45.4.7.15. StorageScope Field Name Required Nullable Type Description Format cluster String namespace String label StorageScopeLabel 45.4.7.16. StorageScopeLabel Field Name Required Nullable Type Description Format key String value String 45.4.7.17. StorageSeverity Enum Values UNSET_SEVERITY LOW_SEVERITY MEDIUM_SEVERITY HIGH_SEVERITY CRITICAL_SEVERITY 45.4.7.18. V1DryRunResponse Field Name Required Nullable Type Description Format alerts List of V1DryRunResponseAlert 45.4.7.19. V1DryRunResponseAlert Field Name Required Nullable Type Description Format deployment String violations List of string 45.5. ExportPolicies POST /v1/policies/export ExportPolicies takes a list of policy IDs and returns either the entire list of policies or an error message 45.5.1. Description 45.5.2. Parameters 45.5.2.1. Body Parameter Name Description Required Default Pattern body V1ExportPoliciesRequest X 45.5.3. Return Type StorageExportPoliciesResponse 45.5.4. Content Type application/json 45.5.5. Responses Table 45.5. HTTP Response Codes Code Message Datatype 200 A successful response. StorageExportPoliciesResponse 0 An unexpected error response. RuntimeError 45.5.6. Samples 45.5.7. Common object reference 45.5.7.1. PolicyMitreAttackVectors Field Name Required Nullable Type Description Format tactic String techniques List of string 45.5.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 45.5.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. 
* Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 45.5.7.3. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 45.5.7.4. StorageBooleanOperator Enum Values OR AND 45.5.7.5. StorageEnforcementAction FAIL_KUBE_REQUEST_ENFORCEMENT: FAIL_KUBE_REQUEST_ENFORCEMENT takes effect only if admission control webhook is enabled to listen on exec and port-forward events. FAIL_DEPLOYMENT_CREATE_ENFORCEMENT: FAIL_DEPLOYMENT_CREATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object creates. FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT: FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object updates. Enum Values UNSET_ENFORCEMENT SCALE_TO_ZERO_ENFORCEMENT UNSATISFIABLE_NODE_CONSTRAINT_ENFORCEMENT KILL_POD_ENFORCEMENT FAIL_BUILD_ENFORCEMENT FAIL_KUBE_REQUEST_ENFORCEMENT FAIL_DEPLOYMENT_CREATE_ENFORCEMENT FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT 45.5.7.6. StorageEventSource Enum Values NOT_APPLICABLE DEPLOYMENT_EVENT AUDIT_LOG_EVENT 45.5.7.7. StorageExclusion Field Name Required Nullable Type Description Format name String deployment StorageExclusionDeployment image StorageExclusionImage expiration Date date-time 45.5.7.8. StorageExclusionDeployment Field Name Required Nullable Type Description Format name String scope StorageScope 45.5.7.9. StorageExclusionImage Field Name Required Nullable Type Description Format name String 45.5.7.10. StorageExportPoliciesResponse Field Name Required Nullable Type Description Format policies List of StoragePolicy 45.5.7.11. StorageLifecycleStage Enum Values DEPLOY BUILD RUNTIME 45.5.7.12. StoragePolicy Field Name Required Nullable Type Description Format id String name String description String rationale String remediation String disabled Boolean categories List of string lifecycleStages List of StorageLifecycleStage eventSource StorageEventSource NOT_APPLICABLE, DEPLOYMENT_EVENT, AUDIT_LOG_EVENT, exclusions List of StorageExclusion scope List of StorageScope severity StorageSeverity UNSET_SEVERITY, LOW_SEVERITY, MEDIUM_SEVERITY, HIGH_SEVERITY, CRITICAL_SEVERITY, enforcementActions List of StorageEnforcementAction FAIL_DEPLOYMENT_CREATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object creates/updates. FAIL_KUBE_REQUEST_ENFORCEMENT takes effect only if admission control webhook is enabled to listen on exec and port-forward events. FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object updates. notifiers List of string lastUpdated Date date-time SORTName String For internal use only. SORTLifecycleStage String For internal use only. SORTEnforcement Boolean For internal use only. 
policyVersion String policySections List of StoragePolicySection mitreAttackVectors List of PolicyMitreAttackVectors criteriaLocked Boolean Read-only field. If true, the policy's criteria fields are rendered read-only. mitreVectorsLocked Boolean Read-only field. If true, the policy's MITRE ATT&CK fields are rendered read-only. isDefault Boolean Read-only field. Indicates the policy is a default policy if true and a custom policy if false. 45.5.7.13. StoragePolicyGroup Field Name Required Nullable Type Description Format fieldName String booleanOperator StorageBooleanOperator OR, AND, negate Boolean values List of StoragePolicyValue 45.5.7.14. StoragePolicySection Field Name Required Nullable Type Description Format sectionName String policyGroups List of StoragePolicyGroup 45.5.7.15. StoragePolicyValue Field Name Required Nullable Type Description Format value String 45.5.7.16. StorageScope Field Name Required Nullable Type Description Format cluster String namespace String label StorageScopeLabel 45.5.7.17. StorageScopeLabel Field Name Required Nullable Type Description Format key String value String 45.5.7.18. StorageSeverity Enum Values UNSET_SEVERITY LOW_SEVERITY MEDIUM_SEVERITY HIGH_SEVERITY CRITICAL_SEVERITY 45.5.7.19. V1ExportPoliciesRequest Field Name Required Nullable Type Description Format policyIds List of string 45.6. PolicyFromSearch POST /v1/policies/from-search 45.6.1. Description 45.6.2. Parameters 45.6.2.1. Body Parameter Name Description Required Default Pattern body V1PolicyFromSearchRequest X 45.6.3. Return Type V1PolicyFromSearchResponse 45.6.4. Content Type application/json 45.6.5. Responses Table 45.6. HTTP Response Codes Code Message Datatype 200 A successful response. V1PolicyFromSearchResponse 0 An unexpected error response. RuntimeError 45.6.6. Samples 45.6.7. Common object reference 45.6.7.1. PolicyMitreAttackVectors Field Name Required Nullable Type Description Format tactic String techniques List of string 45.6.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 45.6.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). 
In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 45.6.7.3. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 45.6.7.4. StorageBooleanOperator Enum Values OR AND 45.6.7.5. StorageEnforcementAction FAIL_KUBE_REQUEST_ENFORCEMENT: FAIL_KUBE_REQUEST_ENFORCEMENT takes effect only if admission control webhook is enabled to listen on exec and port-forward events. FAIL_DEPLOYMENT_CREATE_ENFORCEMENT: FAIL_DEPLOYMENT_CREATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object creates. FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT: FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object updates. Enum Values UNSET_ENFORCEMENT SCALE_TO_ZERO_ENFORCEMENT UNSATISFIABLE_NODE_CONSTRAINT_ENFORCEMENT KILL_POD_ENFORCEMENT FAIL_BUILD_ENFORCEMENT FAIL_KUBE_REQUEST_ENFORCEMENT FAIL_DEPLOYMENT_CREATE_ENFORCEMENT FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT 45.6.7.6. StorageEventSource Enum Values NOT_APPLICABLE DEPLOYMENT_EVENT AUDIT_LOG_EVENT 45.6.7.7. StorageExclusion Field Name Required Nullable Type Description Format name String deployment StorageExclusionDeployment image StorageExclusionImage expiration Date date-time 45.6.7.8. StorageExclusionDeployment Field Name Required Nullable Type Description Format name String scope StorageScope 45.6.7.9. StorageExclusionImage Field Name Required Nullable Type Description Format name String 45.6.7.10. StorageLifecycleStage Enum Values DEPLOY BUILD RUNTIME 45.6.7.11. StoragePolicy Field Name Required Nullable Type Description Format id String name String description String rationale String remediation String disabled Boolean categories List of string lifecycleStages List of StorageLifecycleStage eventSource StorageEventSource NOT_APPLICABLE, DEPLOYMENT_EVENT, AUDIT_LOG_EVENT, exclusions List of StorageExclusion scope List of StorageScope severity StorageSeverity UNSET_SEVERITY, LOW_SEVERITY, MEDIUM_SEVERITY, HIGH_SEVERITY, CRITICAL_SEVERITY, enforcementActions List of StorageEnforcementAction FAIL_DEPLOYMENT_CREATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object creates/updates. FAIL_KUBE_REQUEST_ENFORCEMENT takes effect only if admission control webhook is enabled to listen on exec and port-forward events. 
FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object updates. notifiers List of string lastUpdated Date date-time SORTName String For internal use only. SORTLifecycleStage String For internal use only. SORTEnforcement Boolean For internal use only. policyVersion String policySections List of StoragePolicySection mitreAttackVectors List of PolicyMitreAttackVectors criteriaLocked Boolean Read-only field. If true, the policy's criteria fields are rendered read-only. mitreVectorsLocked Boolean Read-only field. If true, the policy's MITRE ATT&CK fields are rendered read-only. isDefault Boolean Read-only field. Indicates the policy is a default policy if true and a custom policy if false. 45.6.7.12. StoragePolicyGroup Field Name Required Nullable Type Description Format fieldName String booleanOperator StorageBooleanOperator OR, AND, negate Boolean values List of StoragePolicyValue 45.6.7.13. StoragePolicySection Field Name Required Nullable Type Description Format sectionName String policyGroups List of StoragePolicyGroup 45.6.7.14. StoragePolicyValue Field Name Required Nullable Type Description Format value String 45.6.7.15. StorageScope Field Name Required Nullable Type Description Format cluster String namespace String label StorageScopeLabel 45.6.7.16. StorageScopeLabel Field Name Required Nullable Type Description Format key String value String 45.6.7.17. StorageSeverity Enum Values UNSET_SEVERITY LOW_SEVERITY MEDIUM_SEVERITY HIGH_SEVERITY CRITICAL_SEVERITY 45.6.7.18. V1PolicyFromSearchRequest Field Name Required Nullable Type Description Format searchParams String 45.6.7.19. V1PolicyFromSearchResponse Field Name Required Nullable Type Description Format policy StoragePolicy alteredSearchTerms List of string hasNestedFields Boolean 45.7. ListPolicies GET /v1/policies ListPolicies returns the list of policies. 45.7.1. Description 45.7.2. Parameters 45.7.2.1. Query Parameters Name Description Required Default Pattern query - null pagination.limit - null pagination.offset - null pagination.sortOption.field - null pagination.sortOption.reversed - null pagination.sortOption.aggregateBy.aggrFunc - UNSET pagination.sortOption.aggregateBy.distinct - null 45.7.3. Return Type V1ListPoliciesResponse 45.7.4. Content Type application/json 45.7.5. Responses Table 45.7. HTTP Response Codes Code Message Datatype 200 A successful response. V1ListPoliciesResponse 0 An unexpected error response. RuntimeError 45.7.6. Samples 45.7.7. Common object reference 45.7.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 45.7.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. 
Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 45.7.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 45.7.7.3. StorageEventSource Enum Values NOT_APPLICABLE DEPLOYMENT_EVENT AUDIT_LOG_EVENT 45.7.7.4. StorageLifecycleStage Enum Values DEPLOY BUILD RUNTIME 45.7.7.5. StorageListPolicy Field Name Required Nullable Type Description Format id String name String description String severity StorageSeverity UNSET_SEVERITY, LOW_SEVERITY, MEDIUM_SEVERITY, HIGH_SEVERITY, CRITICAL_SEVERITY, disabled Boolean lifecycleStages List of StorageLifecycleStage notifiers List of string lastUpdated Date date-time eventSource StorageEventSource NOT_APPLICABLE, DEPLOYMENT_EVENT, AUDIT_LOG_EVENT, isDefault Boolean 45.7.7.6. StorageSeverity Enum Values UNSET_SEVERITY LOW_SEVERITY MEDIUM_SEVERITY HIGH_SEVERITY CRITICAL_SEVERITY 45.7.7.7. V1ListPoliciesResponse Field Name Required Nullable Type Description Format policies List of StorageListPolicy 45.8. DeletePolicy DELETE /v1/policies/{id} DeletePolicy removes a policy by ID. 45.8.1. Description 45.8.2. Parameters 45.8.2.1. Path Parameters Name Description Required Default Pattern id X null 45.8.3. Return Type Object 45.8.4. Content Type application/json 45.8.5. Responses Table 45.8. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. RuntimeError 45.8.6. Samples 45.8.7. Common object reference 45.8.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. 
Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 45.8.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 45.8.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 45.9. GetPolicy GET /v1/policies/{id} GetPolicy returns the requested policy by ID. 45.9.1. Description 45.9.2. Parameters 45.9.2.1. Path Parameters Name Description Required Default Pattern id X null 45.9.3. Return Type StoragePolicy 45.9.4. Content Type application/json 45.9.5. Responses Table 45.9. HTTP Response Codes Code Message Datatype 200 A successful response. StoragePolicy 0 An unexpected error response. RuntimeError 45.9.6. Samples 45.9.7. Common object reference 45.9.7.1. PolicyMitreAttackVectors Field Name Required Nullable Type Description Format tactic String techniques List of string 45.9.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. 
Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 45.9.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 45.9.7.3. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 45.9.7.4. StorageBooleanOperator Enum Values OR AND 45.9.7.5. StorageEnforcementAction FAIL_KUBE_REQUEST_ENFORCEMENT: FAIL_KUBE_REQUEST_ENFORCEMENT takes effect only if admission control webhook is enabled to listen on exec and port-forward events. FAIL_DEPLOYMENT_CREATE_ENFORCEMENT: FAIL_DEPLOYMENT_CREATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object creates. FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT: FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object updates. Enum Values UNSET_ENFORCEMENT SCALE_TO_ZERO_ENFORCEMENT UNSATISFIABLE_NODE_CONSTRAINT_ENFORCEMENT KILL_POD_ENFORCEMENT FAIL_BUILD_ENFORCEMENT FAIL_KUBE_REQUEST_ENFORCEMENT FAIL_DEPLOYMENT_CREATE_ENFORCEMENT FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT 45.9.7.6. 
StorageEventSource Enum Values NOT_APPLICABLE DEPLOYMENT_EVENT AUDIT_LOG_EVENT 45.9.7.7. StorageExclusion Field Name Required Nullable Type Description Format name String deployment StorageExclusionDeployment image StorageExclusionImage expiration Date date-time 45.9.7.8. StorageExclusionDeployment Field Name Required Nullable Type Description Format name String scope StorageScope 45.9.7.9. StorageExclusionImage Field Name Required Nullable Type Description Format name String 45.9.7.10. StorageLifecycleStage Enum Values DEPLOY BUILD RUNTIME 45.9.7.11. StoragePolicy Field Name Required Nullable Type Description Format id String name String description String rationale String remediation String disabled Boolean categories List of string lifecycleStages List of StorageLifecycleStage eventSource StorageEventSource NOT_APPLICABLE, DEPLOYMENT_EVENT, AUDIT_LOG_EVENT, exclusions List of StorageExclusion scope List of StorageScope severity StorageSeverity UNSET_SEVERITY, LOW_SEVERITY, MEDIUM_SEVERITY, HIGH_SEVERITY, CRITICAL_SEVERITY, enforcementActions List of StorageEnforcementAction FAIL_DEPLOYMENT_CREATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object creates/updates. FAIL_KUBE_REQUEST_ENFORCEMENT takes effect only if admission control webhook is enabled to listen on exec and port-forward events. FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object updates. notifiers List of string lastUpdated Date date-time SORTName String For internal use only. SORTLifecycleStage String For internal use only. SORTEnforcement Boolean For internal use only. policyVersion String policySections List of StoragePolicySection mitreAttackVectors List of PolicyMitreAttackVectors criteriaLocked Boolean Read-only field. If true, the policy's criteria fields are rendered read-only. mitreVectorsLocked Boolean Read-only field. If true, the policy's MITRE ATT&CK fields are rendered read-only. isDefault Boolean Read-only field. Indicates the policy is a default policy if true and a custom policy if false. 45.9.7.12. StoragePolicyGroup Field Name Required Nullable Type Description Format fieldName String booleanOperator StorageBooleanOperator OR, AND, negate Boolean values List of StoragePolicyValue 45.9.7.13. StoragePolicySection Field Name Required Nullable Type Description Format sectionName String policyGroups List of StoragePolicyGroup 45.9.7.14. StoragePolicyValue Field Name Required Nullable Type Description Format value String 45.9.7.15. StorageScope Field Name Required Nullable Type Description Format cluster String namespace String label StorageScopeLabel 45.9.7.16. StorageScopeLabel Field Name Required Nullable Type Description Format key String value String 45.9.7.17. StorageSeverity Enum Values UNSET_SEVERITY LOW_SEVERITY MEDIUM_SEVERITY HIGH_SEVERITY CRITICAL_SEVERITY 45.10. GetPolicyMitreVectors GET /v1/policies/{id}/mitrevectors GetMitreVectorsForPolicy returns the requested policy by ID. 45.10.1. Description 45.10.2. Parameters 45.10.2.1. Path Parameters Name Description Required Default Pattern id X null 45.10.2.2. Query Parameters Name Description Required Default Pattern options.excludePolicy If set to true, policy is excluded from the response. - null 45.10.3. Return Type V1GetPolicyMitreVectorsResponse 45.10.4. Content Type application/json 45.10.5. Responses Table 45.10. HTTP Response Codes Code Message Datatype 200 A successful response. 
V1GetPolicyMitreVectorsResponse 0 An unexpected error response. RuntimeError 45.10.6. Samples 45.10.7. Common object reference 45.10.7.1. PolicyMitreAttackVectors Field Name Required Nullable Type Description Format tactic String techniques List of string 45.10.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 45.10.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 45.10.7.3. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 45.10.7.4. StorageBooleanOperator Enum Values OR AND 45.10.7.5. StorageEnforcementAction FAIL_KUBE_REQUEST_ENFORCEMENT: FAIL_KUBE_REQUEST_ENFORCEMENT takes effect only if admission control webhook is enabled to listen on exec and port-forward events. FAIL_DEPLOYMENT_CREATE_ENFORCEMENT: FAIL_DEPLOYMENT_CREATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object creates. 
FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT: FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object updates. Enum Values UNSET_ENFORCEMENT SCALE_TO_ZERO_ENFORCEMENT UNSATISFIABLE_NODE_CONSTRAINT_ENFORCEMENT KILL_POD_ENFORCEMENT FAIL_BUILD_ENFORCEMENT FAIL_KUBE_REQUEST_ENFORCEMENT FAIL_DEPLOYMENT_CREATE_ENFORCEMENT FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT 45.10.7.6. StorageEventSource Enum Values NOT_APPLICABLE DEPLOYMENT_EVENT AUDIT_LOG_EVENT 45.10.7.7. StorageExclusion Field Name Required Nullable Type Description Format name String deployment StorageExclusionDeployment image StorageExclusionImage expiration Date date-time 45.10.7.8. StorageExclusionDeployment Field Name Required Nullable Type Description Format name String scope StorageScope 45.10.7.9. StorageExclusionImage Field Name Required Nullable Type Description Format name String 45.10.7.10. StorageLifecycleStage Enum Values DEPLOY BUILD RUNTIME 45.10.7.11. StorageMitreAttackVector Field Name Required Nullable Type Description Format tactic StorageMitreTactic techniques List of StorageMitreTechnique 45.10.7.12. StorageMitreTactic Field Name Required Nullable Type Description Format id String name String description String 45.10.7.13. StorageMitreTechnique Field Name Required Nullable Type Description Format id String name String description String 45.10.7.14. StoragePolicy Field Name Required Nullable Type Description Format id String name String description String rationale String remediation String disabled Boolean categories List of string lifecycleStages List of StorageLifecycleStage eventSource StorageEventSource NOT_APPLICABLE, DEPLOYMENT_EVENT, AUDIT_LOG_EVENT, exclusions List of StorageExclusion scope List of StorageScope severity StorageSeverity UNSET_SEVERITY, LOW_SEVERITY, MEDIUM_SEVERITY, HIGH_SEVERITY, CRITICAL_SEVERITY, enforcementActions List of StorageEnforcementAction FAIL_DEPLOYMENT_CREATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object creates/updates. FAIL_KUBE_REQUEST_ENFORCEMENT takes effect only if admission control webhook is enabled to listen on exec and port-forward events. FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object updates. notifiers List of string lastUpdated Date date-time SORTName String For internal use only. SORTLifecycleStage String For internal use only. SORTEnforcement Boolean For internal use only. policyVersion String policySections List of StoragePolicySection mitreAttackVectors List of PolicyMitreAttackVectors criteriaLocked Boolean Read-only field. If true, the policy's criteria fields are rendered read-only. mitreVectorsLocked Boolean Read-only field. If true, the policy's MITRE ATT&CK fields are rendered read-only. isDefault Boolean Read-only field. Indicates the policy is a default policy if true and a custom policy if false. 45.10.7.15. StoragePolicyGroup Field Name Required Nullable Type Description Format fieldName String booleanOperator StorageBooleanOperator OR, AND, negate Boolean values List of StoragePolicyValue 45.10.7.16. StoragePolicySection Field Name Required Nullable Type Description Format sectionName String policyGroups List of StoragePolicyGroup 45.10.7.17. StoragePolicyValue Field Name Required Nullable Type Description Format value String 45.10.7.18. StorageScope Field Name Required Nullable Type Description Format cluster String namespace String label StorageScopeLabel 45.10.7.19. 
StorageScopeLabel Field Name Required Nullable Type Description Format key String value String 45.10.7.20. StorageSeverity Enum Values UNSET_SEVERITY LOW_SEVERITY MEDIUM_SEVERITY HIGH_SEVERITY CRITICAL_SEVERITY 45.10.7.21. V1GetPolicyMitreVectorsResponse Field Name Required Nullable Type Description Format policy StoragePolicy vectors List of StorageMitreAttackVector 45.11. PatchPolicy PATCH /v1/policies/{id} PatchPolicy edits an existing policy. 45.11.1. Description 45.11.2. Parameters 45.11.2.1. Path Parameters Name Description Required Default Pattern id X null 45.11.2.2. Body Parameter Name Description Required Default Pattern body V1PatchPolicyRequest X 45.11.3. Return Type Object 45.11.4. Content Type application/json 45.11.5. Responses Table 45.11. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. RuntimeError 45.11.6. Samples 45.11.7. Common object reference 45.11.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 45.11.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 
value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 45.11.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 45.11.7.3. V1PatchPolicyRequest Field Name Required Nullable Type Description Format id String disabled Boolean 45.12. PutPolicy PUT /v1/policies/{id} PutPolicy modifies an existing policy. 45.12.1. Description 45.12.2. Parameters 45.12.2.1. Path Parameters Name Description Required Default Pattern id X null 45.12.2.2. Body Parameter Name Description Required Default Pattern body StoragePolicy X 45.12.3. Return Type Object 45.12.4. Content Type application/json 45.12.5. Responses Table 45.12. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. RuntimeError 45.12.6. Samples 45.12.7. Common object reference 45.12.7.1. PolicyMitreAttackVectors Field Name Required Nullable Type Description Format tactic String techniques List of string 45.12.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 45.12.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. 
Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 45.12.7.3. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 45.12.7.4. StorageBooleanOperator Enum Values OR AND 45.12.7.5. StorageEnforcementAction FAIL_KUBE_REQUEST_ENFORCEMENT: FAIL_KUBE_REQUEST_ENFORCEMENT takes effect only if admission control webhook is enabled to listen on exec and port-forward events. FAIL_DEPLOYMENT_CREATE_ENFORCEMENT: FAIL_DEPLOYMENT_CREATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object creates. FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT: FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object updates. Enum Values UNSET_ENFORCEMENT SCALE_TO_ZERO_ENFORCEMENT UNSATISFIABLE_NODE_CONSTRAINT_ENFORCEMENT KILL_POD_ENFORCEMENT FAIL_BUILD_ENFORCEMENT FAIL_KUBE_REQUEST_ENFORCEMENT FAIL_DEPLOYMENT_CREATE_ENFORCEMENT FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT 45.12.7.6. StorageEventSource Enum Values NOT_APPLICABLE DEPLOYMENT_EVENT AUDIT_LOG_EVENT 45.12.7.7. StorageExclusion Field Name Required Nullable Type Description Format name String deployment StorageExclusionDeployment image StorageExclusionImage expiration Date date-time 45.12.7.8. StorageExclusionDeployment Field Name Required Nullable Type Description Format name String scope StorageScope 45.12.7.9. StorageExclusionImage Field Name Required Nullable Type Description Format name String 45.12.7.10. StorageLifecycleStage Enum Values DEPLOY BUILD RUNTIME 45.12.7.11. StoragePolicy Field Name Required Nullable Type Description Format id String name String description String rationale String remediation String disabled Boolean categories List of string lifecycleStages List of StorageLifecycleStage eventSource StorageEventSource NOT_APPLICABLE, DEPLOYMENT_EVENT, AUDIT_LOG_EVENT, exclusions List of StorageExclusion scope List of StorageScope severity StorageSeverity UNSET_SEVERITY, LOW_SEVERITY, MEDIUM_SEVERITY, HIGH_SEVERITY, CRITICAL_SEVERITY, enforcementActions List of StorageEnforcementAction FAIL_DEPLOYMENT_CREATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object creates/updates. FAIL_KUBE_REQUEST_ENFORCEMENT takes effect only if admission control webhook is enabled to listen on exec and port-forward events. FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object updates. notifiers List of string lastUpdated Date date-time SORTName String For internal use only. SORTLifecycleStage String For internal use only. SORTEnforcement Boolean For internal use only. policyVersion String policySections List of StoragePolicySection mitreAttackVectors List of PolicyMitreAttackVectors criteriaLocked Boolean Read-only field. If true, the policy's criteria fields are rendered read-only. mitreVectorsLocked Boolean Read-only field. If true, the policy's MITRE ATT&CK fields are rendered read-only. isDefault Boolean Read-only field. Indicates the policy is a default policy if true and a custom policy if false. 45.12.7.12. StoragePolicyGroup Field Name Required Nullable Type Description Format fieldName String booleanOperator StorageBooleanOperator OR, AND, negate Boolean values List of StoragePolicyValue 45.12.7.13. 
StoragePolicySection Field Name Required Nullable Type Description Format sectionName String policyGroups List of StoragePolicyGroup 45.12.7.14. StoragePolicyValue Field Name Required Nullable Type Description Format value String 45.12.7.15. StorageScope Field Name Required Nullable Type Description Format cluster String namespace String label StorageScopeLabel 45.12.7.16. StorageScopeLabel Field Name Required Nullable Type Description Format key String value String 45.12.7.17. StorageSeverity Enum Values UNSET_SEVERITY LOW_SEVERITY MEDIUM_SEVERITY HIGH_SEVERITY CRITICAL_SEVERITY 45.13. ImportPolicies POST /v1/policies/import ImportPolicies accepts a list of Policies and returns a list of the policies which could not be imported 45.13.1. Description 45.13.2. Parameters 45.13.2.1. Body Parameter Name Description Required Default Pattern body V1ImportPoliciesRequest X 45.13.3. Return Type V1ImportPoliciesResponse 45.13.4. Content Type application/json 45.13.5. Responses Table 45.13. HTTP Response Codes Code Message Datatype 200 A successful response. V1ImportPoliciesResponse 0 An unexpected error response. RuntimeError 45.13.6. Samples 45.13.7. Common object reference 45.13.7.1. PolicyMitreAttackVectors Field Name Required Nullable Type Description Format tactic String techniques List of string 45.13.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 45.13.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) 
Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 45.13.7.3. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 45.13.7.4. StorageBooleanOperator Enum Values OR AND 45.13.7.5. StorageEnforcementAction FAIL_KUBE_REQUEST_ENFORCEMENT: FAIL_KUBE_REQUEST_ENFORCEMENT takes effect only if admission control webhook is enabled to listen on exec and port-forward events. FAIL_DEPLOYMENT_CREATE_ENFORCEMENT: FAIL_DEPLOYMENT_CREATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object creates. FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT: FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object updates. Enum Values UNSET_ENFORCEMENT SCALE_TO_ZERO_ENFORCEMENT UNSATISFIABLE_NODE_CONSTRAINT_ENFORCEMENT KILL_POD_ENFORCEMENT FAIL_BUILD_ENFORCEMENT FAIL_KUBE_REQUEST_ENFORCEMENT FAIL_DEPLOYMENT_CREATE_ENFORCEMENT FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT 45.13.7.6. StorageEventSource Enum Values NOT_APPLICABLE DEPLOYMENT_EVENT AUDIT_LOG_EVENT 45.13.7.7. StorageExclusion Field Name Required Nullable Type Description Format name String deployment StorageExclusionDeployment image StorageExclusionImage expiration Date date-time 45.13.7.8. StorageExclusionDeployment Field Name Required Nullable Type Description Format name String scope StorageScope 45.13.7.9. StorageExclusionImage Field Name Required Nullable Type Description Format name String 45.13.7.10. StorageLifecycleStage Enum Values DEPLOY BUILD RUNTIME 45.13.7.11. StoragePolicy Field Name Required Nullable Type Description Format id String name String description String rationale String remediation String disabled Boolean categories List of string lifecycleStages List of StorageLifecycleStage eventSource StorageEventSource NOT_APPLICABLE, DEPLOYMENT_EVENT, AUDIT_LOG_EVENT, exclusions List of StorageExclusion scope List of StorageScope severity StorageSeverity UNSET_SEVERITY, LOW_SEVERITY, MEDIUM_SEVERITY, HIGH_SEVERITY, CRITICAL_SEVERITY, enforcementActions List of StorageEnforcementAction FAIL_DEPLOYMENT_CREATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object creates/updates. FAIL_KUBE_REQUEST_ENFORCEMENT takes effect only if admission control webhook is enabled to listen on exec and port-forward events. FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object updates. notifiers List of string lastUpdated Date date-time SORTName String For internal use only. SORTLifecycleStage String For internal use only. SORTEnforcement Boolean For internal use only. policyVersion String policySections List of StoragePolicySection mitreAttackVectors List of PolicyMitreAttackVectors criteriaLocked Boolean Read-only field. If true, the policy's criteria fields are rendered read-only. mitreVectorsLocked Boolean Read-only field. If true, the policy's MITRE ATT&CK fields are rendered read-only. isDefault Boolean Read-only field. Indicates the policy is a default policy if true and a custom policy if false. 
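The StoragePolicy table above lists the fields that make up a policy object. To make that structure more concrete, the following is a minimal sketch of such an object expressed as a Python dictionary, suitable as a request body for the PostPolicy and PutPolicy endpoints or as an entry in the policies list of a V1ImportPoliciesRequest. The concrete values (policy name, category, criteria field name, policyVersion) are illustrative assumptions, not values mandated by the API.

    # Minimal StoragePolicy sketch; field names follow the table above,
    # values are illustrative assumptions.
    example_policy = {
        "name": "Example - Latest image tag",
        "description": "Alert on deployments that use the 'latest' image tag.",
        "rationale": "Mutable tags make it hard to know what is actually running.",
        "remediation": "Pin images to an immutable tag or digest.",
        "disabled": False,
        "categories": ["DevOps Best Practices"],   # assumed existing category
        "lifecycleStages": ["BUILD", "DEPLOY"],    # StorageLifecycleStage values
        "eventSource": "NOT_APPLICABLE",           # StorageEventSource value
        "severity": "MEDIUM_SEVERITY",             # StorageSeverity value
        "enforcementActions": [],                  # StorageEnforcementAction values
        "policyVersion": "1.1",                    # assumed current policy version
        "policySections": [
            {
                "sectionName": "Image criteria",
                "policyGroups": [
                    {
                        "fieldName": "Image Tag",  # assumed criteria field name
                        "booleanOperator": "OR",   # StorageBooleanOperator value
                        "negate": False,
                        "values": [{"value": "latest"}],
                    }
                ],
            }
        ],
    }

Read-only fields such as isDefault, criteriaLocked, mitreVectorsLocked, and the SORT* fields are omitted from the sketch because they are set by the server.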
45.13.7.12. StoragePolicyGroup Field Name Required Nullable Type Description Format fieldName String booleanOperator StorageBooleanOperator OR, AND, negate Boolean values List of StoragePolicyValue 45.13.7.13. StoragePolicySection Field Name Required Nullable Type Description Format sectionName String policyGroups List of StoragePolicyGroup 45.13.7.14. StoragePolicyValue Field Name Required Nullable Type Description Format value String 45.13.7.15. StorageScope Field Name Required Nullable Type Description Format cluster String namespace String label StorageScopeLabel 45.13.7.16. StorageScopeLabel Field Name Required Nullable Type Description Format key String value String 45.13.7.17. StorageSeverity Enum Values UNSET_SEVERITY LOW_SEVERITY MEDIUM_SEVERITY HIGH_SEVERITY CRITICAL_SEVERITY 45.13.7.18. V1ImportPoliciesMetadata Field Name Required Nullable Type Description Format overwrite Boolean 45.13.7.19. V1ImportPoliciesRequest Field Name Required Nullable Type Description Format metadata V1ImportPoliciesMetadata policies List of StoragePolicy 45.13.7.20. V1ImportPoliciesResponse Field Name Required Nullable Type Description Format responses List of V1ImportPolicyResponse allSucceeded Boolean 45.13.7.21. V1ImportPolicyError Field Name Required Nullable Type Description Format message String type String duplicateName String validationError String 45.13.7.22. V1ImportPolicyResponse Field Name Required Nullable Type Description Format succeeded Boolean policy StoragePolicy errors List of V1ImportPolicyError 45.14. EnableDisablePolicyNotification PATCH /v1/policies/{policyId}/notifiers EnableDisablePolicyNotification enables or disables notifications for a policy by ID. 45.14.1. Description 45.14.2. Parameters 45.14.2.1. Path Parameters Name Description Required Default Pattern policyId X null 45.14.2.2. Body Parameter Name Description Required Default Pattern body V1EnableDisablePolicyNotificationRequest X 45.14.3. Return Type Object 45.14.4. Content Type application/json 45.14.5. Responses Table 45.14. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. RuntimeError 45.14.6. Samples 45.14.7. Common object reference 45.14.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 45.14.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. 
The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 45.14.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 45.14.7.3. V1EnableDisablePolicyNotificationRequest Field Name Required Nullable Type Description Format policyId String notifierIds List of string disable Boolean 45.15. PostPolicy POST /v1/policies PostPolicy creates a new policy. 45.15.1. Description 45.15.2. Parameters 45.15.2.1. Body Parameter Name Description Required Default Pattern body StoragePolicy X 45.15.2.2. Query Parameters Name Description Required Default Pattern enableStrictValidation - null 45.15.3. Return Type StoragePolicy 45.15.4. Content Type application/json 45.15.5. Responses Table 45.15. HTTP Response Codes Code Message Datatype 200 A successful response. StoragePolicy 0 An unexpected error response. RuntimeError 45.15.6. Samples 45.15.7. Common object reference 45.15.7.1. PolicyMitreAttackVectors Field Name Required Nullable Type Description Format tactic String techniques List of string 45.15.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 45.15.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. 
Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 45.15.7.3. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 45.15.7.4. StorageBooleanOperator Enum Values OR AND 45.15.7.5. StorageEnforcementAction FAIL_KUBE_REQUEST_ENFORCEMENT: FAIL_KUBE_REQUEST_ENFORCEMENT takes effect only if admission control webhook is enabled to listen on exec and port-forward events. FAIL_DEPLOYMENT_CREATE_ENFORCEMENT: FAIL_DEPLOYMENT_CREATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object creates. FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT: FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object updates. Enum Values UNSET_ENFORCEMENT SCALE_TO_ZERO_ENFORCEMENT UNSATISFIABLE_NODE_CONSTRAINT_ENFORCEMENT KILL_POD_ENFORCEMENT FAIL_BUILD_ENFORCEMENT FAIL_KUBE_REQUEST_ENFORCEMENT FAIL_DEPLOYMENT_CREATE_ENFORCEMENT FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT 45.15.7.6. StorageEventSource Enum Values NOT_APPLICABLE DEPLOYMENT_EVENT AUDIT_LOG_EVENT 45.15.7.7. StorageExclusion Field Name Required Nullable Type Description Format name String deployment StorageExclusionDeployment image StorageExclusionImage expiration Date date-time 45.15.7.8. StorageExclusionDeployment Field Name Required Nullable Type Description Format name String scope StorageScope 45.15.7.9. StorageExclusionImage Field Name Required Nullable Type Description Format name String 45.15.7.10. StorageLifecycleStage Enum Values DEPLOY BUILD RUNTIME 45.15.7.11. 
StoragePolicy Field Name Required Nullable Type Description Format id String name String description String rationale String remediation String disabled Boolean categories List of string lifecycleStages List of StorageLifecycleStage eventSource StorageEventSource NOT_APPLICABLE, DEPLOYMENT_EVENT, AUDIT_LOG_EVENT, exclusions List of StorageExclusion scope List of StorageScope severity StorageSeverity UNSET_SEVERITY, LOW_SEVERITY, MEDIUM_SEVERITY, HIGH_SEVERITY, CRITICAL_SEVERITY, enforcementActions List of StorageEnforcementAction FAIL_DEPLOYMENT_CREATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object creates/updates. FAIL_KUBE_REQUEST_ENFORCEMENT takes effect only if admission control webhook is enabled to listen on exec and port-forward events. FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object updates. notifiers List of string lastUpdated Date date-time SORTName String For internal use only. SORTLifecycleStage String For internal use only. SORTEnforcement Boolean For internal use only. policyVersion String policySections List of StoragePolicySection mitreAttackVectors List of PolicyMitreAttackVectors criteriaLocked Boolean Read-only field. If true, the policy's criteria fields are rendered read-only. mitreVectorsLocked Boolean Read-only field. If true, the policy's MITRE ATT&CK fields are rendered read-only. isDefault Boolean Read-only field. Indicates the policy is a default policy if true and a custom policy if false. 45.15.7.12. StoragePolicyGroup Field Name Required Nullable Type Description Format fieldName String booleanOperator StorageBooleanOperator OR, AND, negate Boolean values List of StoragePolicyValue 45.15.7.13. StoragePolicySection Field Name Required Nullable Type Description Format sectionName String policyGroups List of StoragePolicyGroup 45.15.7.14. StoragePolicyValue Field Name Required Nullable Type Description Format value String 45.15.7.15. StorageScope Field Name Required Nullable Type Description Format cluster String namespace String label StorageScopeLabel 45.15.7.16. StorageScopeLabel Field Name Required Nullable Type Description Format key String value String 45.15.7.17. StorageSeverity Enum Values UNSET_SEVERITY LOW_SEVERITY MEDIUM_SEVERITY HIGH_SEVERITY CRITICAL_SEVERITY 45.16. ReassessPolicies POST /v1/policies/reassess ReassessPolicies reevaluates all the policies. 45.16.1. Description 45.16.2. Parameters 45.16.3. Return Type Object 45.16.4. Content Type application/json 45.16.5. Responses Table 45.16. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. RuntimeError 45.16.6. Samples 45.16.7. Common object reference 45.16.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 45.16.7.1.1. 
JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 45.16.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 45.17. GetPolicyCategories GET /v1/policyCategories GetPolicyCategories returns the policy categories. 45.17.1. Description 45.17.2. Parameters 45.17.3. Return Type V1PolicyCategoriesResponse 45.17.4. Content Type application/json 45.17.5. Responses Table 45.17. HTTP Response Codes Code Message Datatype 200 A successful response. V1PolicyCategoriesResponse 0 An unexpected error response. RuntimeError 45.17.6. Samples 45.17.7. Common object reference 45.17.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 45.17.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. 
Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 45.17.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 45.17.7.3. V1PolicyCategoriesResponse Field Name Required Nullable Type Description Format categories List of string
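As a usage illustration for the read-only endpoints documented above (ListPolicies, GetPolicy, GetPolicyMitreVectors, and GetPolicyCategories), the following sketch issues the corresponding HTTP requests with the Python requests library. The hostname central.example.com, the ROX_API_TOKEN environment variable, and the verify=False setting are assumptions for the example, not values defined by the API.

    import os
    import requests

    # Assumed connection details; adjust for your environment.
    BASE = "https://central.example.com"
    HEADERS = {"Authorization": f"Bearer {os.environ['ROX_API_TOKEN']}"}

    # ListPolicies: GET /v1/policies returns a V1ListPoliciesResponse.
    resp = requests.get(f"{BASE}/v1/policies", headers=HEADERS, verify=False)
    resp.raise_for_status()
    policies = resp.json()["policies"]

    if policies:
        policy_id = policies[0]["id"]

        # GetPolicy: GET /v1/policies/{id} returns the full StoragePolicy.
        policy = requests.get(f"{BASE}/v1/policies/{policy_id}",
                              headers=HEADERS, verify=False).json()

        # GetPolicyMitreVectors: GET /v1/policies/{id}/mitrevectors.
        # options.excludePolicy=true omits the policy from the response.
        vectors = requests.get(
            f"{BASE}/v1/policies/{policy_id}/mitrevectors",
            params={"options.excludePolicy": "true"},
            headers=HEADERS, verify=False,
        ).json()

    # GetPolicyCategories: GET /v1/policyCategories returns a V1PolicyCategoriesResponse.
    categories = requests.get(f"{BASE}/v1/policyCategories",
                              headers=HEADERS, verify=False).json()["categories"]

The pagination.limit, pagination.offset, and query parameters listed for ListPolicies can be passed the same way through the params argument.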
[ "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) 
any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "ExportPoliciesResponse is used by the API but it is defined in storage because we expect customers to store them. We do backwards-compatibility checks on objects in the storge folder and those checks should be applied to this object", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) 
any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) 
any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) 
any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/api_reference/policyservice
Chapter 2. Prerequisites
Chapter 2. Prerequisites Installer-provisioned installation of OpenShift Container Platform requires: One provisioner node with Red Hat Enterprise Linux (RHEL) 9.x installed. The provisioner can be removed after installation. Three control plane nodes Baseboard management controller (BMC) access to each node At least one network: One required routable network One optional provisioning network One optional management network Before starting an installer-provisioned installation of OpenShift Container Platform, ensure the hardware environment meets the following requirements. 2.1. Node requirements Installer-provisioned installation involves a number of hardware node requirements: CPU architecture: All nodes must use x86_64 or aarch64 CPU architecture. Similar nodes: Red Hat recommends nodes have an identical configuration per role. That is, Red Hat recommends nodes be the same brand and model with the same CPU, memory, and storage configuration. Baseboard Management Controller: The provisioner node must be able to access the baseboard management controller (BMC) of each OpenShift Container Platform cluster node. You may use IPMI, Redfish, or a proprietary protocol. Latest generation: Nodes must be of the most recent generation. Installer-provisioned installation relies on BMC protocols, which must be compatible across nodes. Additionally, RHEL 9.x ships with the most recent drivers for RAID controllers. Ensure that the nodes are recent enough to support RHEL 9.x for the provisioner node and RHCOS 9.x for the control plane and worker nodes. Registry node: (Optional) If setting up a disconnected mirrored registry, it is recommended the registry reside in its own node. Provisioner node: Installer-provisioned installation requires one provisioner node. Control plane: Installer-provisioned installation requires three control plane nodes for high availability. You can deploy an OpenShift Container Platform cluster with only three control plane nodes, making the control plane nodes schedulable as worker nodes. Smaller clusters are more resource efficient for administrators and developers during development, production, and testing. Worker nodes: While not required, a typical production cluster has two or more worker nodes. Important Do not deploy a cluster with only one worker node, because the cluster will deploy with routers and ingress traffic in a degraded state. Network interfaces: Each node must have at least one network interface for the routable baremetal network. Each node must have one network interface for a provisioning network when using the provisioning network for deployment. Using the provisioning network is the default configuration. Note Only one network card (NIC) on the same subnet can route traffic through the gateway. By default, Address Resolution Protocol (ARP) uses the lowest numbered NIC. Use a single NIC for each node in the same subnet to ensure that network load balancing works as expected. When using multiple NICs for a node in the same subnet, use a single bond or team interface. Then add the other IP addresses to that interface in the form of an alias IP address. If you require fault tolerance or load balancing at the network interface level, use an alias IP address on the bond or team interface. Alternatively, you can disable a secondary NIC on the same subnet or ensure that it has no IP address. 
Unified Extensible Firmware Interface (UEFI): Installer-provisioned installation requires UEFI boot on all OpenShift Container Platform nodes when using IPv6 addressing on the provisioning network. In addition, UEFI Device PXE Settings must be set to use the IPv6 protocol on the provisioning network NIC, but omitting the provisioning network removes this requirement. Important When starting the installation from virtual media such as an ISO image, delete all old UEFI boot table entries. If the boot table includes entries that are not generic entries provided by the firmware, the installation might fail. Secure Boot: Many production scenarios require nodes with Secure Boot enabled to verify the node only boots with trusted software, such as UEFI firmware drivers, EFI applications, and the operating system. You may deploy with Secure Boot manually or managed. Manually: To deploy an OpenShift Container Platform cluster with Secure Boot manually, you must enable UEFI boot mode and Secure Boot on each control plane node and each worker node. Red Hat supports Secure Boot with manually enabled UEFI and Secure Boot only when installer-provisioned installations use Redfish virtual media. See "Configuring nodes for Secure Boot manually" in the "Configuring nodes" section for additional details. Managed: To deploy an OpenShift Container Platform cluster with managed Secure Boot, you must set the bootMode value to UEFISecureBoot in the install-config.yaml file. Red Hat only supports installer-provisioned installation with managed Secure Boot on 10th generation HPE hardware and 13th generation Dell hardware running firmware version 2.75.75.75 or greater. Deploying with managed Secure Boot does not require Redfish virtual media. See "Configuring managed Secure Boot" in the "Setting up the environment for an OpenShift installation" section for details. Note Red Hat does not support managing self-generated keys, or other keys, for Secure Boot. 2.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 2.1. Minimum resource requirements Machine Operating System CPU [1] RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHEL 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 One CPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = CPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 2.3. 
Planning a bare metal cluster for OpenShift Virtualization If you will use OpenShift Virtualization, it is important to be aware of several requirements before you install your bare metal cluster. If you want to use live migration features, you must have multiple worker nodes at the time of cluster installation . This is because live migration requires the cluster-level high availability (HA) flag to be set to true. The HA flag is set when a cluster is installed and cannot be changed afterwards. If there are fewer than two worker nodes defined when you install your cluster, the HA flag is set to false for the life of the cluster. Note You can install OpenShift Virtualization on a single-node cluster, but single-node OpenShift does not support high availability. Live migration requires shared storage. Storage for OpenShift Virtualization must support and use the ReadWriteMany (RWX) access mode. If you plan to use Single Root I/O Virtualization (SR-IOV), ensure that your network interface controllers (NICs) are supported by OpenShift Container Platform. Additional resources Preparing your cluster for OpenShift Virtualization About Single Root I/O Virtualization (SR-IOV) hardware networks Connecting a virtual machine to an SR-IOV network 2.4. Firmware requirements for installing with virtual media The installation program for installer-provisioned OpenShift Container Platform clusters validates the hardware and firmware compatibility with Redfish virtual media. The installation program does not begin installation on a node if the node firmware is not compatible. The following tables list the minimum firmware versions tested and verified to work for installer-provisioned OpenShift Container Platform clusters deployed by using Redfish virtual media. Note Red Hat does not test every combination of firmware, hardware, or other third-party components. For further information about third-party support, see Red Hat third-party support policy . For information about updating the firmware, see the hardware documentation for the nodes or contact the hardware vendor. Table 2.2. Firmware compatibility for HP hardware with Redfish virtual media Model Management Firmware versions 10th Generation iLO5 2.63 or later Table 2.3. Firmware compatibility for Dell hardware with Redfish virtual media Model Management Firmware versions 15th Generation iDRAC 9 v6.10.30.00 14th Generation iDRAC 9 v6.10.30.00 13th Generation iDRAC 8 v2.75.75.75 or later Additional resources Unable to discover new bare metal hosts using the BMC 2.5. Network requirements Installer-provisioned installation of OpenShift Container Platform involves multiple network requirements. First, installer-provisioned installation involves an optional non-routable provisioning network for provisioning the operating system on each bare-metal node. Second, installer-provisioned installation involves a routable baremetal network. 2.5.1. Ensuring required ports are open Certain ports must be open between cluster nodes for installer-provisioned installations to complete successfully. In certain situations, such as using separate subnets for far edge worker nodes, you must ensure that the nodes in these subnets can communicate with nodes in the other subnets on the following required ports. Table 2.4. Required ports Port Description 67 , 68 When using a provisioning network, cluster nodes access the dnsmasq DHCP server over their provisioning network interfaces using ports 67 and 68 . 
69 When using a provisioning network, cluster nodes communicate with the TFTP server on port 69 using their provisioning network interfaces. The TFTP server runs on the bootstrap VM. The bootstrap VM runs on the provisioner node. 80 When not using the image caching option or when using virtual media, the provisioner node must have port 80 open on the baremetal machine network interface to stream the Red Hat Enterprise Linux CoreOS (RHCOS) image from the provisioner node to the cluster nodes. 123 The cluster nodes must access the NTP server on port 123 using the baremetal machine network. 5050 The Ironic Inspector API runs on the control plane nodes and listens on port 5050 . The Inspector API is responsible for hardware introspection, which collects information about the hardware characteristics of the bare-metal nodes. 5051 Port 5050 uses port 5051 as a proxy. 6180 When deploying with virtual media and not using TLS, the provisioner node and the control plane nodes must have port 6180 open on the baremetal machine network interface so that the baseboard management controller (BMC) of the worker nodes can access the RHCOS image. Starting with OpenShift Container Platform 4.13, the default HTTP port is 6180 . 6183 When deploying with virtual media and using TLS, the provisioner node and the control plane nodes must have port 6183 open on the baremetal machine network interface so that the BMC of the worker nodes can access the RHCOS image. 6385 The Ironic API server runs initially on the bootstrap VM and later on the control plane nodes and listens on port 6385 . The Ironic API allows clients to interact with Ironic for bare-metal node provisioning and management, including operations such as enrolling new nodes, managing their power state, deploying images, and cleaning the hardware. 6388 Port 6385 uses port 6388 as a proxy. 8080 When using image caching without TLS, port 8080 must be open on the provisioner node and accessible by the BMC interfaces of the cluster nodes. 8083 When using the image caching option with TLS, port 8083 must be open on the provisioner node and accessible by the BMC interfaces of the cluster nodes. 9999 By default, the Ironic Python Agent (IPA) listens on TCP port 9999 for API calls from the Ironic conductor service. Communication between the bare-metal node where IPA is running and the Ironic conductor service uses this port. 2.5.2. Increase the network MTU Before deploying OpenShift Container Platform, increase the network maximum transmission unit (MTU) to 1500 or more. If the MTU is lower than 1500, the Ironic image that is used to boot the node might fail to communicate with the Ironic inspector pod, and inspection will fail. If this occurs, installation stops because the nodes are not available for installation. 2.5.3. Configuring NICs OpenShift Container Platform deploys with two networks: provisioning : The provisioning network is an optional non-routable network used for provisioning the underlying operating system on each node that is a part of the OpenShift Container Platform cluster. The network interface for the provisioning network on each cluster node must have the BIOS or UEFI configured to PXE boot. The provisioningNetworkInterface configuration setting specifies the provisioning network NIC name on the control plane nodes, which must be identical on the control plane nodes. The bootMACAddress configuration setting provides a means to specify a particular NIC on each node for the provisioning network. 
The provisioning network is optional, but it is required for PXE booting. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia . baremetal : The baremetal network is a routable network. You can use any NIC to interface with the baremetal network provided the NIC is not configured to use the provisioning network. Important When using a VLAN, each NIC must be on a separate VLAN corresponding to the appropriate network. 2.5.4. DNS requirements Clients access the OpenShift Container Platform cluster nodes over the baremetal network. A network administrator must configure a subdomain or subzone where the canonical name extension is the cluster name. <cluster_name>.<base_domain> For example: test-cluster.example.com OpenShift Container Platform includes functionality that uses cluster membership information to generate A/AAAA records. This resolves the node names to their IP addresses. After the nodes are registered with the API, the cluster can disperse node information without using CoreDNS-mDNS. This eliminates the network traffic associated with multicast DNS. CoreDNS requires both TCP and UDP connections to the upstream DNS server to function correctly. Ensure the upstream DNS server can receive both TCP and UDP connections from OpenShift Container Platform cluster nodes. In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard ingress API A/AAAA records are used for name resolution and PTR records are used for reverse name resolution. Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records or DHCP to set the hostnames for all the nodes. Installer-provisioned installation includes functionality that uses cluster membership information to generate A/AAAA records. This resolves the node names to their IP addresses. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 2.5. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. An A/AAAA record and a PTR record identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Routes *.apps.<cluster_name>.<base_domain>. The wildcard A/AAAA record refers to the application ingress load balancer. The application ingress load balancer targets the nodes that run the Ingress Controller pods. The Ingress Controller pods run on the worker nodes by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Tip You can use the dig command to verify DNS resolution. 2.5.5. Dynamic Host Configuration Protocol (DHCP) requirements By default, installer-provisioned installation deploys ironic-dnsmasq with DHCP enabled for the provisioning network. No other DHCP servers should be running on the provisioning network when the provisioningNetwork configuration setting is set to managed , which is the default value. 
If you have a DHCP server running on the provisioning network, you must set the provisioningNetwork configuration setting to unmanaged in the install-config.yaml file. Network administrators must reserve IP addresses for each node in the OpenShift Container Platform cluster for the baremetal network on an external DHCP server. 2.5.6. Reserving IP addresses for nodes with the DHCP server For the baremetal network, a network administrator must reserve several IP addresses, including: Two unique virtual IP addresses. One virtual IP address for the API endpoint. One virtual IP address for the wildcard ingress endpoint. One IP address for the provisioner node. One IP address for each control plane node. One IP address for each worker node, if applicable. Reserving IP addresses so they become static IP addresses Some administrators prefer to use static IP addresses so that each node's IP address remains constant in the absence of a DHCP server. To configure static IP addresses with NMState, see "(Optional) Configuring node network interfaces" in the "Setting up the environment for an OpenShift installation" section. Networking between external load balancers and control plane nodes External load balancing services and the control plane nodes must run on the same L2 network, and on the same VLAN when using VLANs to route traffic between the load balancing services and the control plane nodes. Important The storage interface requires a DHCP reservation or a static IP. The following table provides an exemplary embodiment of fully qualified domain names. The API and name server addresses begin with canonical name extensions. The hostnames of the control plane and worker nodes are exemplary, so you can use any host naming convention you prefer. Usage Host Name IP API api.<cluster_name>.<base_domain> <ip> Ingress LB (apps) *.apps.<cluster_name>.<base_domain> <ip> Provisioner node provisioner.<cluster_name>.<base_domain> <ip> Control-plane-0 openshift-control-plane-0.<cluster_name>.<base_domain> <ip> Control-plane-1 openshift-control-plane-1.<cluster_name>-.<base_domain> <ip> Control-plane-2 openshift-control-plane-2.<cluster_name>.<base_domain> <ip> Worker-0 openshift-worker-0.<cluster_name>.<base_domain> <ip> Worker-1 openshift-worker-1.<cluster_name>.<base_domain> <ip> Worker-n openshift-worker-n.<cluster_name>.<base_domain> <ip> Note If you do not create DHCP reservations, the installation program requires reverse DNS resolution to set the hostnames for the Kubernetes API node, the provisioner node, the control plane nodes, and the worker nodes. 2.5.7. Provisioner node requirements You must specify the MAC address for the provisioner node in your installation configuration. The bootMacAddress specification is typically associated with PXE network booting. However, the Ironic provisioning service also requires the bootMacAddress specification to identify nodes during the inspection of the cluster, or during node redeployment in the cluster. The provisioner node requires layer 2 connectivity for network booting, DHCP and DNS resolution, and local network communication. The provisioner node requires layer 3 connectivity for virtual media booting. 2.5.8. Network Time Protocol (NTP) Each OpenShift Container Platform node in the cluster must have access to an NTP server. OpenShift Container Platform nodes use NTP to synchronize their clocks. For example, cluster nodes use SSL/TLS certificates that require validation, which might fail if the date and time between the nodes are not in sync. 
Important Define a consistent clock date and time format in each cluster node's BIOS settings, or installation might fail. You can reconfigure the control plane nodes to act as NTP servers on disconnected clusters, and reconfigure worker nodes to retrieve time from the control plane nodes. 2.5.9. Port access for the out-of-band management IP address The out-of-band management IP address is on a separate network from the node. To ensure that the out-of-band management can communicate with the provisioner node during installation, the out-of-band management IP address must be granted access to port 6180 on the provisioner node and on the OpenShift Container Platform control plane nodes. TLS port 6183 is required for virtual media installation, for example, by using Redfish. Additional resources Using DNS forwarding 2.6. Configuring nodes Configuring nodes when using the provisioning network Each node in the cluster requires the following configuration for proper installation. Warning A mismatch between nodes will cause an installation failure. While the cluster nodes can contain more than two NICs, the installation process only focuses on the first two NICs. In the following table, NIC1 is a non-routable network ( provisioning ) that is only used for the installation of the OpenShift Container Platform cluster. NIC Network VLAN NIC1 provisioning <provisioning_vlan> NIC2 baremetal <baremetal_vlan> The Red Hat Enterprise Linux (RHEL) 9.x installation process on the provisioner node might vary. To install Red Hat Enterprise Linux (RHEL) 9.x using a local Satellite server or a PXE server, PXE-enable NIC2. PXE Boot order NIC1 PXE-enabled provisioning network 1 NIC2 baremetal network. PXE-enabled is optional. 2 Note Ensure PXE is disabled on all other NICs. Configure the control plane and worker nodes as follows: PXE Boot order NIC1 PXE-enabled (provisioning network) 1 Configuring nodes without the provisioning network The installation process requires one NIC: NIC Network VLAN NICx baremetal <baremetal_vlan> NICx is a routable network ( baremetal ) that is used for the installation of the OpenShift Container Platform cluster, and routable to the internet. Important The provisioning network is optional, but it is required for PXE booting. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia . Configuring nodes for Secure Boot manually Secure Boot prevents a node from booting unless it verifies the node is using only trusted software, such as UEFI firmware drivers, EFI applications, and the operating system. Note Red Hat only supports manually configured Secure Boot when deploying with Redfish virtual media. To enable Secure Boot manually, refer to the hardware guide for the node and execute the following: Procedure Boot the node and enter the BIOS menu. Set the node's boot mode to UEFI Enabled . Enable Secure Boot. Important Red Hat does not support Secure Boot with self-generated keys. 2.7. Out-of-band management Nodes typically have an additional NIC used by the baseboard management controllers (BMCs). These BMCs must be accessible from the provisioner node. Each node must be accessible via out-of-band management. When using an out-of-band management network, the provisioner node requires access to the out-of-band management network for a successful OpenShift Container Platform installation. The out-of-band management setup is out of scope for this document. 
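Although the out-of-band management setup itself is out of scope, a quick reachability check from the provisioner node can save troubleshooting later. The following is only a sketch: <bmc_ip>, <user>, and <password> are placeholders, the first command assumes IPMI over LAN is enabled on the BMC, and the Redfish resource path in the second command varies by vendor.
# ipmitool -I lanplus -H <bmc_ip> -U <user> -P <password> power status     # IPMI check; should report the chassis power state
# curl -k -u <user>:<password> https://<bmc_ip>/redfish/v1/Systems/        # Redfish check; should return a JSON collection of systems
If either command fails from the provisioner node, verify the routing and the port access requirements described above before starting the installation.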
Using a separate management network for out-of-band management can enhance performance and improve security. However, using the provisioning network or the bare metal network are valid options. Note The bootstrap VM features a maximum of two network interfaces. If you configure a separate management network for out-of-band management, and you are using a provisioning network, the bootstrap VM requires routing access to the management network through one of the network interfaces. In this scenario, the bootstrap VM can then access three networks: the bare metal network the provisioning network the management network routed through one of the network interfaces 2.8. Required data for installation Prior to the installation of the OpenShift Container Platform cluster, gather the following information from all cluster nodes: Out-of-band management IP Examples Dell (iDRAC) IP HP (iLO) IP Fujitsu (iRMC) IP When using the provisioning network NIC ( provisioning ) MAC address NIC ( baremetal ) MAC address When omitting the provisioning network NIC ( baremetal ) MAC address 2.9. Validation checklist for nodes When using the provisioning network ❏ NIC1 VLAN is configured for the provisioning network. ❏ NIC1 for the provisioning network is PXE-enabled on the provisioner, control plane, and worker nodes. ❏ NIC2 VLAN is configured for the baremetal network. ❏ PXE has been disabled on all other NICs. ❏ DNS is configured with API and Ingress endpoints. ❏ Control plane and worker nodes are configured. ❏ All nodes accessible via out-of-band management. ❏ (Optional) A separate management network has been created. ❏ Required data for installation. When omitting the provisioning network ❏ NIC1 VLAN is configured for the baremetal network. ❏ DNS is configured with API and Ingress endpoints. ❏ Control plane and worker nodes are configured. ❏ All nodes accessible via out-of-band management. ❏ (Optional) A separate management network has been created. ❏ Required data for installation.
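To work through the DNS items in the checklists above before you begin, a quick spot check with dig, as suggested in the DNS requirements section, is usually enough. The host names below reuse the test-cluster.example.com example from this chapter; substitute your own cluster name and base domain, and compare the answers with the API and ingress virtual IP addresses that you reserved.
$ dig +short api.test-cluster.example.com                 # should return the API VIP
$ dig +short test.apps.test-cluster.example.com           # any name under *.apps should return the ingress VIP
$ dig +short -x <api_vip>                                  # reverse lookup, if you rely on PTR records to set host names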
[ "<cluster_name>.<base_domain>", "test-cluster.example.com" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/deploying_installer-provisioned_clusters_on_bare_metal/ipi-install-prerequisites
8.11. Controlling teamd with teamdctl
8.11. Controlling teamd with teamdctl In order to query a running instance of teamd for statistics or configuration information, or to make changes, the control tool teamdctl is used. To view the current team state of a team team0 , enter the following command as root : For a more verbose output: For a complete state dump in JSON format (useful for machine processing) of team0 , use the following command: For a configuration dump in JSON format of team0 , use the following command: To view the configuration of a port em1 , that is part of a team team0 , enter the following command: 8.11.1. Add a Port to a Network Team To add a port em1 to a network team team0 , issue the following command as root : Important If using teamdctl directly to add a port, the port must be set to down . Otherwise the teamdctl team0 port add em1 command will fail. 8.11.2. Remove a Port From a Network Team To remove an interface em1 from a network team team0 , issue the following command as root : 8.11.3. Applying a Sticky Setting to a Port in a Network Team You can use the teamdctl command to apply a sticky setting to ensure that a specific port is used as an active link when it is available. Prerequisites You already created a team of network interfaces. As a result, you have a port ( em1 ) that you want to update the configuration of. Procedure To apply a JSON format configuration to a port em1 in a network team team0 , run the following commands: Update the configuration of the sticky setting for em1 : Remove em1 : Add em1 again so that the sticky setting takes effect: Note that the old configuration will be overwritten and that any options omitted will be reset to the default values. See the teamdctl(8) man page for more team daemon control tool command examples. 8.11.4. View the Configuration of a Port in a Network Team To copy the configuration of a port em1 in a network team team0 , issue the following command as root : This will dump the JSON format configuration of the port to standard output.
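As a worked example of the Important note above, the following sketch shows the usual sequence when the port you want to add is currently up; the team and port names are the same placeholders used throughout this section.
~]# ip link set dev em1 down
~]# teamdctl team0 port add em1
~]# teamdctl team0 state view
The state view output should now list em1 among the team's ports.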
[ "~]# teamdctl team0 state view", "~]# teamdctl team0 state view -v", "~]# teamdctl team0 state dump", "~]# teamdctl team0 config dump", "~]# teamdctl team0 port config dump em1", "~]# teamdctl team0 port add em1", "~]# teamdctl team0 port remove em1", "~]# teamdctl team0 port config update em1 '{ \"prio\": 100, \"sticky\": true }'", "~]# teamdctl team0 port remove em1", "~]# teamdctl team0 port add em1", "~]# teamdctl team0 port config dump em1" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-controlling_teamd_with_teamdctl
18.5. Isolated Mode
18.5. Isolated Mode When using Isolated mode , guests connected to the virtual switch can communicate with each other, and with the host physical machine, but their traffic will not pass outside of the host physical machine, nor can they receive traffic from outside the host physical machine. Using dnsmasq in this mode is required for basic functionality such as DHCP. However, even if this network is isolated from any physical network, DNS names are still resolved. Therefore a situation can arise when DNS names resolve but ICMP echo request (ping) commands fail. Figure 18.6. Virtual network switch in isolated mode
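As an illustration, an isolated network of this kind is typically defined in libvirt by omitting the <forward> element from the network XML. The definition below is only a sketch with placeholder values for the network name, bridge name, and address range; it is not an example taken from this guide.
<network>
  <name>isolated</name>
  <bridge name='virbr1'/>
  <ip address='192.168.152.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.152.2' end='192.168.152.254'/>
    </dhcp>
  </ip>
</network>
A file like this can be loaded with virsh net-define and started with virsh net-start. Because no <forward> element is present, guest traffic stays on the host physical machine, while the dnsmasq instance attached to the network still provides DHCP and name resolution as described above.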
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-iso-mode
Chapter 2. Architectures
Chapter 2. Architectures Red Hat Enterprise Linux 7 is available on the following architectures: [1] 64-bit AMD 64-bit Intel IBM POWER7+ (big endian) IBM POWER8 (big endian) [2] IBM POWER8 (little endian) [3] IBM POWER9 (little endian) [4] [5] IBM Z [4] [6] 64-bit ARM [4] The Red Hat Enterprise Linux 7.7 is distributed with the kernel version 3.10.0-1062, which provides support for the following architectures: 64-bit AMD 64-bit Intel IBM POWER7+ (big endian) IBM POWER8 (big endian) IBM POWER8 (little endian) IBM Z (kernel version 3.10) The following architectures remain fully supported and continue to receive z-stream security and bug fix updates in accordance with the Red Hat Enterprise Linux Life Cycle : IBM POWER9 (little endian) IBM Z - Structure A (kernel version 4.14) 64-bit ARM [1] Note that the Red Hat Enterprise Linux 7 installation is supported only on 64-bit hardware. Red Hat Enterprise Linux 7 is able to run 32-bit operating systems, including versions of Red Hat Enterprise Linux, as virtual machines. [2] Red Hat Enterprise Linux 7 POWER8 (big endian) are currently supported as KVM guests on Red Hat Enterprise Linux 7 POWER8 systems that run the KVM hypervisor, and on PowerVM. [3] Red Hat Enterprise Linux 7 POWER8 (little endian) is currently supported as a KVM guest on Red Hat Enterprise Linux 7 POWER8 systems that run the KVM hypervisor, and on PowerVM. In addition, Red Hat Enterprise Linux 7 POWER8 (little endian) guests are supported on Red Hat Enterprise Linux 7 POWER9 systems that run the KVM hypervisor in POWER8-compatibility mode on version 4.14 kernel using the kernel-alt package. [4] This architecture is supported with the kernel version 4.14, provided by the kernel-alt packages. For details, see the Red Hat Enterprise Linux 7.5 Release Notes . [5] Red Hat Enterprise Linux 7 POWER9 (little endian) is currently supported as a KVM guest on Red Hat Enterprise Linux 7 POWER9 systems that run the KVM hypervisor on version 4.14 kernel using the kernel-alt package, and on PowerVM. [6] Red Hat Enterprise Linux 7 for IBM Z (both the 3.10 kernel version and the 4.14 kernel version) is currently supported as a KVM guest on Red Hat Enterprise Linux 7 for IBM Z hosts that run the KVM hypervisor on version 4.14 kernel using the kernel-alt package.
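To check which of these architectures and kernel lines a particular system is running, the following commands are generally sufficient; ppc64 corresponds to IBM POWER big endian, ppc64le to IBM POWER little endian, and s390x to IBM Z.
# uname -m     # prints the CPU architecture, for example x86_64, ppc64, ppc64le, s390x, or aarch64
# uname -r     # prints the running kernel version, for example a 3.10.0 kernel or a 4.14 kernel from kernel-alt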
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.7_release_notes/architectures
Chapter 8. KVM Guest Timing Management
Chapter 8. KVM Guest Timing Management Virtualization involves several challenges for time keeping in guest virtual machines. Interrupts cannot always be delivered simultaneously and instantaneously to all guest virtual machines. This is because interrupts in virtual machines are not true interrupts. Instead, they are injected into the guest virtual machine by the host machine. The host may be running another guest virtual machine, or a different process. Therefore, the precise timing typically required by interrupts may not always be possible. Guest virtual machines without accurate time keeping may experience issues with network applications and processes, as session validity, migration, and other network activities rely on timestamps to remain correct. KVM avoids these issues by providing guest virtual machines with a paravirtualized clock ( kvm-clock ). However, it is still important to test timing before attempting activities that may be affected by time keeping inaccuracies, such as guest migration. Important To avoid the problems described above, the Network Time Protocol (NTP) should be configured on the host and the guest virtual machines. On guests using Red Hat Enterprise Linux 6 and earlier, NTP is implemented by the ntpd service. For more information, see the Red Hat Enterprise 6 Deployment Guide . On systems using Red Hat Enterprise Linux 7, NTP time synchronization service can be provided by ntpd or by the chronyd service. Note that Chrony has some advantages on virtual machines. For more information, see the Configuring NTP Using the chrony Suite and Configuring NTP Using ntpd sections in the Red Hat Enterprise Linux 7 System Administrator's Guide. The mechanics of guest virtual machine time synchronization By default, the guest synchronizes its time with the hypervisor as follows: When the guest system boots, the guest reads the time from the emulated Real Time Clock (RTC). When the NTP protocol is initiated, it automatically synchronizes the guest clock. Afterwards, during normal guest operation, NTP performs clock adjustments in the guest. When a guest is resumed after a pause or a restoration process, a command to synchronize the guest clock to a specified value should be issued by the management software (such as virt-manager ). This synchronization works only if the QEMU guest agent is installed in the guest and supports the feature. The value to which the guest clock synchronizes is usually the host clock value. Constant Time Stamp Counter (TSC) Modern Intel and AMD CPUs provide a constant Time Stamp Counter (TSC). The count frequency of the constant TSC does not vary when the CPU core itself changes frequency, for example to comply with a power-saving policy. A CPU with a constant TSC frequency is necessary in order to use the TSC as a clock source for KVM guests. Your CPU has a constant Time Stamp Counter if the constant_tsc flag is present. To determine if your CPU has the constant_tsc flag enter the following command: If any output is given, your CPU has the constant_tsc bit. If no output is given, follow the instructions below. Configuring Hosts without a Constant Time Stamp Counter Systems without a constant TSC frequency cannot use the TSC as a clock source for virtual machines, and require additional configuration. Power management features interfere with accurate time keeping and must be disabled for guest virtual machines to accurately keep time with KVM. Important These instructions are for AMD revision F CPUs only. 
If the CPU lacks the constant_tsc bit, disable all power management features . Each system has several timers it uses to keep time. The TSC is not stable on the host, which is sometimes caused by cpufreq changes, deep C state, or migration to a host with a faster TSC. Deep C sleep states can stop the TSC. To prevent the kernel using deep C states append processor.max_cstate=1 to the kernel boot. To make this change persistent, edit values of the GRUB_CMDLINE_LINUX key in the /etc/default/grub file. For example. if you want to enable emergency mode for each boot, edit the entry as follows: Note that you can specify multiple parameters for the GRUB_CMDLINE_LINUX key, similarly to adding the parameters in the GRUB 2 boot menu. To disable cpufreq (only necessary on hosts without the constant_tsc ), install kernel-tools and enable the cpupower.service ( systemctl enable cpupower.service ). If you want to disable this service every time the guest virtual machine boots, change the configuration file in /etc/sysconfig/cpupower and change the CPUPOWER_START_OPTS and CPUPOWER_STOP_OPTS. Valid limits can be found in the /sys/devices/system/cpu/ cpuid /cpufreq/scaling_available_governors files. For more information on this package or on power management and governors, see the Red Hat Enterprise Linux 7 Power Management Guide . 8.1. Host-wide Time Synchronization Virtual network devices in KVM guests do not support hardware timestamping, which means it is difficult to synchronize the clocks of guests that use a network protocol like NTP or PTP with better accuracy than tens of microseconds. When a more accurate synchronization of the guests is required, it is recommended to synchronize the clock of the host using NTP or PTP with hardware timestamping, and to synchronize the guests to the host directly. Red Hat Enterprise Linux 7.5 and later provide a virtual PTP hardware clock (PHC), which enables the guests to synchronize to the host with a sub-microsecond accuracy. Important Note that for PHC to work properly, both the host and the guest need be using RHEL 7.5 or later as the operating system (OS). To enable the PHC device, do the following on the guest OS: Set the ptp_kvm module to load after reboot. Add the /dev/ptp0 clock as a reference to the chrony configuration: Restart the chrony daemon: To verify the host-guest time synchronization has been configured correctly, use the chronyc sources command on a guest. The output should look similar to the following:
[ "cat /proc/cpuinfo | grep constant_tsc", "GRUB_CMDLINE_LINUX=\"emergency\"", "echo ptp_kvm > /etc/modules-load.d/ptp_kvm.conf", "echo \"refclock PHC /dev/ptp0 poll 2\" >> /etc/chrony.conf", "systemctl restart chronyd", "chronyc sources 210 Number of sources = 1 MS Name/IP address Stratum Poll Reach LastRx Last sample =============================================================================== #* PHC0 0 2 377 4 -6ns[ -6ns] +/- 726ns" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/chap-KVM_guest_timing_management
Part III. Monitor Your Cache
Part III. Monitor Your Cache
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/part-monitor_your_cache
Chapter 11. Removing a Directory Server instance
Chapter 11. Removing a Directory Server instance If you no longer require a Directory Server instance, you can remove it to regain disk space. If you run multiple instances on one server, removing a specific instance does not affect the other instances. 11.1. Removing an instance using the command line You can remove a Directory Server instance using the command line. Prerequisites The instance has been removed from a replication topology, if it was part of one. Procedure Optional: Create a backup of the Directory Server directories: Stop the instance: # dsctl instance_name stop Copy the /var/lib/dirsrv/slapd- instance_name / directory: # cp -rp /var/lib/dirsrv/slapd- instance_name / /root/var-lib-dirsrv- instance_name .bak/ This directory contains the database, as well as the backup and export directory. Copy the /etc/dirsrv/slapd- instance_name / directory: # cp -rp /etc/dirsrv/slapd- instance_name / /root/etc-dirsrv- instance_name .bak/ Remove the instance: # dsctl instance_name remove --do-it Removing instance ... Completed instance removal Verification Verify that the /var/lib/dirsrv/slapd- instance_name / and /etc/dirsrv/slapd- instance_name / directories have been removed: # ls /var/lib/dirsrv/slapd- instance_name /etc/dirsrv/slapd- instance_name / ls: cannot access '/var/lib/dirsrv/slapd- instance_name ': No such file or directory ls: cannot access '/etc/dirsrv/slapd- instance_name ': No such file or directory Additional resources Removing an instance from a replication topology 11.2. Removing an instance using the web console You can remove a Directory Server instance using the web console. However, if you want to create a backup of the Directory Server directories which contain, for example, the databases and configuration files, you must copy these directories on the command line. Prerequisites The instance has been removed from a replication topology, if it was part of one. You are logged in to the instance in the web console. Procedure Optional: Create a backup of the Directory Server directories. Click the Actions button, and select Stop instance . Copy the /var/lib/dirsrv/slapd- instance_name / directory: # cp -rp /var/lib/dirsrv/slapd- instance_name / /root/var-lib-dirsrv- instance_name .bak/ This directory contains the database, as well as the backup and export directory. Copy the /etc/dirsrv/slapd- instance_name / directory: # cp -rp /etc/dirsrv/slapd- instance_name / /root/etc-dirsrv- instance_name .bak/ Click the Actions button, and select Remove this instance . Select Yes, I am sure , and click Remove Instance to confirm. Verification Verify that the /var/lib/dirsrv/slapd- instance_name / and /etc/dirsrv/slapd- instance_name / directories have been removed: # ls /var/lib/dirsrv/slapd- instance_name /etc/dirsrv/slapd- instance_name / ls: cannot access '/var/lib/dirsrv/slapd- instance_name ': No such file or directory ls: cannot access '/etc/dirsrv/slapd- instance_name ': No such file or directory Additional resources Removing an instance from a replication topology
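Beyond the directory checks in the verification steps above, you can also confirm that the instance's service is no longer running. The unit name below assumes the standard dirsrv@ template naming used for Directory Server instances, with instance_name as the placeholder; after the instance has been removed, the command should no longer report active.
# systemctl is-active dirsrv@ instance_name .service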
[ "dsctl instance_name stop", "cp -rp /var/lib/dirsrv/slapd- instance_name / /root/var-lib-dirsrv- instance_name .bak/", "cp -rp /etc/dirsrv/slapd- instance_name / /root/etc-dirsrv- instance_name .bak/", "dsctl instance_name remove --do-it Removing instance Completed instance removal", "ls /var/lib/dirsrv/slapd- instance_name /etc/dirsrv/slapd- instance_name / ls: cannot access '/var/lib/dirsrv/slapd- instance_name ': No such file or directory ls: cannot access '/etc/dirsrv/slapd- instance_name ': No such file or directory", "cp -rp /var/lib/dirsrv/slapd- instance_name / /root/var-lib-dirsrv- instance_name .bak/", "cp -rp /etc/dirsrv/slapd- instance_name / /root/etc-dirsrv- instance_name .bak/", "ls /var/lib/dirsrv/slapd- instance_name /etc/dirsrv/slapd- instance_name / ls: cannot access '/var/lib/dirsrv/slapd- instance_name ': No such file or directory ls: cannot access '/etc/dirsrv/slapd- instance_name ': No such file or directory" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/installing_red_hat_directory_server/assembly_removing-a-directory-server-instance_installing-rhds
Preface
Preface Open Java Development Kit (OpenJDK) is a free and open-source implementation of the Java Platform, Standard Edition (Java SE). Eclipse Temurin is available in three LTS versions: OpenJDK 8u, OpenJDK 11u, and OpenJDK 17u. Binary files for Eclipse Temurin are available for macOS, Microsoft Windows, and multiple Linux x86 Operating Systems including Red Hat Enterprise Linux and Ubuntu.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_eclipse_temurin_17.0.7/pr01
Preface
Preface The contents within this guide provide an overview of Clair for Red Hat Quay, running Clair on standalone Red Hat Quay and Operator deployments, and advanced Clair configuration.
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/vulnerability_reporting_with_clair_on_red_hat_quay/pr01
8.4.2. Backup Technologies
8.4.2. Backup Technologies Red Hat Enterprise Linux comes with several different programs for backing up and restoring data. By themselves, these utility programs do not constitute a complete backup solution. However, they can be used as the nucleus of such a solution. Note As noted in Section 8.2.6.1, "Restoring From Bare Metal" , most computers based on the standard PC architecture do not possess the necessary functionality to boot directly from a backup tape. Consequently, Red Hat Enterprise Linux is not capable of performing a tape boot when running on such hardware. However, it is also possible to use your Red Hat Enterprise Linux CD-ROM as a system recovery environment; for more information see the chapter on basic system recovery in the System Administrators Guide . 8.4.2.1. tar The tar utility is well known among UNIX system administrators. It is the archiving method of choice for sharing ad-hoc bits of source code and files between systems. The tar implementation included with Red Hat Enterprise Linux is GNU tar , one of the more feature-rich tar implementations. Using tar , backing up the contents of a directory can be as simple as issuing a command similar to the following: This command creates an archive file called home-backup.tar in /mnt/backup/ . The archive contains the contents of the /home/ directory. The resulting archive file will be nearly as large as the data being backed up. Depending on the type of data being backed up, compressing the archive file can result in significant size reductions. The archive file can be compressed by adding a single option to the command: The resulting home-backup.tar.gz archive file is now gzip compressed [30] . There are many other options to tar ; to learn more about them, read the tar(1) man page. [30] The .gz extension is traditionally used to signify that the file has been compressed with gzip . Sometimes .tar.gz is shortened to .tgz to keep file names reasonably sized.
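Restoring data from such an archive is the reverse operation. The following is a minimal sketch based on the backup commands above; because GNU tar strips the leading / when creating the archive, extracting with -C / places the files back under their original /home/ paths:

tar tzf /mnt/backup/home-backup.tar.gz
tar xzf /mnt/backup/home-backup.tar.gz -C /

The first command only lists the archive contents, which is a prudent check before extracting over live data.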
[ "tar cf /mnt/backup/home-backup.tar /home/", "tar czf /mnt/backup/home-backup.tar.gz /home/" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s2-disaster-backups-tech
30.3. Configure Server Hinting (Library Mode)
30.3. Configure Server Hinting (Library Mode) In Red Hat JBoss Data Grid's Library mode, Server Hinting is configured at the transport level. The following is a Server Hinting sample configuration: Procedure 30.2. Configure Server Hinting for Library Mode The following configuration attributes are used to configure Server Hinting in JBoss Data Grid. The clusterName attribute specifies the name assigned to the cluster. The machineId attribute specifies the JVM instance that contains the original data. This is particularly useful for nodes with multiple JVMs and physical hosts with multiple virtual hosts. The rackId parameter specifies the rack that contains the original data, so that other racks are used for backups. The siteId parameter differentiates between nodes in different data centers replicating to each other. The listed parameters are optional in a JBoss Data Grid configuration. If machineId , rackId , or siteId are included in the configuration, TopologyAwareConsistentHashFactory is selected automatically, enabling Server Hinting. However, if Server Hinting is not configured, JBoss Data Grid's distribution algorithms are allowed to store replications in the same physical machine/rack/data center as the original data.
[ "<transport clusterName = \"MyCluster\" machineId = \"LinuxServer01\" rackId = \"Rack01\" siteId = \"US-WestCoast\" />" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/configure_server_hinting_in_library_mode
26.3. Command Line Version
26.3. Command Line Version The Authentication Configuration Tool can also be run as a command line tool with no interface. The command line version can be used in a configuration script or a kickstart script. The authentication options are summarized in Table 26.1, "Command Line Options" . Note These options can also be found in the authconfig man page or by typing authconfig --help at a shell prompt. Table 26.1. Command Line Options Option Description --enableshadow Enable shadow passwords --disableshadow Disable shadow passwords --enablemd5 Enable MD5 passwords --disablemd5 Disable MD5 passwords --enablenis Enable NIS --disablenis Disable NIS --nisdomain= <domain> Specify NIS domain --nisserver= <server> Specify NIS server --enableldap Enable LDAP for user information --disableldap Disable LDAP for user information --enableldaptls Enable use of TLS with LDAP --disableldaptls Disable use of TLS with LDAP --enableldapauth Enable LDAP for authentication --disableldapauth Disable LDAP for authentication --ldapserver= <server> Specify LDAP server --ldapbasedn= <dn> Specify LDAP base DN --enablekrb5 Enable Kerberos --disablekrb5 Disable Kerberos --krb5kdc= <kdc> Specify Kerberos KDC --krb5adminserver= <server> Specify Kerberos administration server --krb5realm= <realm> Specify Kerberos realm --enablekrb5kdcdns Enable use of DNS to find Kerberos KDCs --disablekrb5kdcdns Disable use of DNS to find Kerberos KDCs --enablekrb5realmdns Enable use of DNS to find Kerberos realms --disablekrb5realmdns Disable use of DNS to find Kerberos realms --enablesmbauth Enable SMB --disablesmbauth Disable SMB --smbworkgroup= <workgroup> Specify SMB workgroup --smbservers= <server> Specify SMB servers --enablewinbind Enable winbind for user information by default --disablewinbind Disable winbind for user information by default --enablewinbindauth Enable winbindauth for authentication by default --disablewinbindauth Disable winbindauth for authentication by default --smbsecurity= <user|server|domain|ads> Security mode to use for Samba and winbind --smbrealm= <STRING> Default realm for Samba and winbind when security=ads --smbidmapuid= <lowest-highest> UID range winbind assigns to domain or ADS users --smbidmapgid= <lowest-highest> GID range winbind assigns to domain or ADS users --winbindseparator= <\> Character used to separate the domain and user part of winbind usernames if winbindusedefaultdomain is not enabled --winbindtemplatehomedir= </home/%D/%U> Directory that winbind users have as their home --winbindtemplateprimarygroup= <nobody> Group that winbind users have as their primary group --winbindtemplateshell= </bin/false> Shell that winbind users have as their default login shell --enablewinbindusedefaultdomain Configures winbind to assume that users with no domain in their usernames are domain users --disablewinbindusedefaultdomain Configures winbind to assume that users with no domain in their usernames are not domain users --winbindjoin= <Administrator> Joins the winbind domain or ADS realm now as this administrator --enablewins Enable WINS for hostname resolution --disablewins Disable WINS for hostname resolution --enablehesiod Enable Hesiod --disablehesiod Disable Hesiod --hesiodlhs= <lhs> Specify Hesiod LHS --hesiodrhs= <rhs> Specify Hesiod RHS --enablecache Enable nscd --disablecache Disable nscd --nostart Do not start or stop the portmap , ypbind , or nscd services even if they are configured --kickstart Do not display the user interface --probe Probe and display network defaults
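For example, the following non-interactive invocation, of the kind that might appear in a kickstart %post script, combines several of the options from Table 26.1; the server names, base DN, and realm are placeholder values used purely for illustration:

authconfig --enableshadow --enablemd5 \
  --enableldap --enableldapauth --enableldaptls \
  --ldapserver=ldap.example.com --ldapbasedn="dc=example,dc=com" \
  --enablekrb5 --krb5realm=EXAMPLE.COM --krb5kdc=kdc.example.com \
  --kickstart

The --kickstart option suppresses the interactive interface so the command can run unattended.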
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Authentication_Configuration-Command_Line_Version
8.14. Troubleshooting Snapshots
8.14. Troubleshooting Snapshots Situation Snapshot creation fails. Step 1 Check if the bricks are thinly provisioned by following these steps: Execute the mount command and check the device name mounted on the brick path. For example: Run the following command to check if the device has a LV pool name. For example: If the Pool field is empty, then the brick is not thinly provisioned. Ensure that the brick is thinly provisioned, and retry the snapshot create command. Step 2 Check if the bricks are down by following these steps: Execute the following command to check the status of the volume: If any bricks are down, then start the bricks by executing the following command: To verify if the bricks are up, execute the following command: Retry the snapshot create command. Step 3 Check if the node is down by following these steps: Execute the following command to check the status of the nodes: If a brick is not listed in the status, then execute the following command: If the status of the node hosting the missing brick is Disconnected , then power-up the node. Retry the snapshot create command. Step 4 Check if rebalance is in progress by following these steps: Execute the following command to check the rebalance status: If rebalance is in progress, wait for it to finish. Retry the snapshot create command. Situation Snapshot delete fails. Step 1 Check if the server quorum is met by following these steps: Execute the following command to check the peer status: If nodes are down, and the cluster is not in quorum, then power up the nodes. To verify if the cluster is in quorum, execute the following command: Retry the snapshot delete command. Situation Snapshot delete command fails on some node(s) during commit phase, leaving the system inconsistent. Solution Identify the node(s) where the delete command failed. This information is available in the delete command's error output. For example: On the node where the delete command failed, bring down glusterd using the following command: On RHEL 7 and RHEL 8, run On RHEL 6, run Important Red Hat Gluster Storage is not supported on Red Hat Enterprise Linux 6 (RHEL 6) from 3.5 Batch Update 1 onwards. See Version Details table in section Red Hat Gluster Storage Software Components and Versions of the Installation Guide Delete that particular snaps repository in /var/lib/glusterd/snaps/ from that node. For example: Start glusterd on that node using the following command: On RHEL 7 and RHEL 8, run On RHEL 6, run Important Red Hat Gluster Storage is not supported on Red Hat Enterprise Linux 6 (RHEL 6) from 3.5 Batch Update 1 onwards. See Version Details table in section Red Hat Gluster Storage Software Components and Versions of the Installation Guide Repeat the 2nd, 3rd, and 4th steps on all the nodes where the commit failed as identified in the 1st step. Retry deleting the snapshot. For example: Situation Snapshot restore fails. Step 1 Check if the server quorum is met by following these steps: Execute the following command to check the peer status: If nodes are down, and the cluster is not in quorum, then power up the nodes. To verify if the cluster is in quorum, execute the following command: Retry the snapshot restore command. Step 2 Check if the volume is in Stop state by following these steps: Execute the following command to check the volume info: If the volume is in Started state, then stop the volume using the following command: Retry the snapshot restore command. Situation Snapshot commands fail. 
Step 1 Check if there is a mismatch in the operating versions by following these steps: Open the following file and check for the operating version: If the operating-version is lesser than 30000, then the snapshot commands are not supported in the version the cluster is operating on. Upgrade all nodes in the cluster to Red Hat Gluster Storage 3.2 or higher. Retry the snapshot command. Situation After rolling upgrade, snapshot feature does not work. Solution You must ensure to make the following changes on the cluster to enable snapshot: Restart the volume using the following commands. Restart glusterd services on all nodes. On RHEL 7 and RHEL 8, run On RHEL 6, run Important Red Hat Gluster Storage is not supported on Red Hat Enterprise Linux 6 (RHEL 6) from 3.5 Batch Update 1 onwards. See Version Details table in section Red Hat Gluster Storage Software Components and Versions of the Installation Guide
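As a shorthand for the operating version check described above, the value can be read directly from the glusterd state file. This is an illustrative command; it assumes the default file location and that the version is stored as an operating-version=<number> entry (the sample value shown is only an example):

grep operating-version /var/lib/glusterd/glusterd.info
operating-version=31305

A value of 30000 or higher indicates that the cluster operating version supports the snapshot commands.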
[ "mount /dev/mapper/snap_lvgrp-snap_lgvol on /rhgs/brick1 type xfs (rw) /dev/mapper/snap_lvgrp1-snap_lgvol1 on /rhgs/brick2 type xfs (rw)", "lvs device-name", "lvs -o pool_lv /dev/mapper/snap_lvgrp-snap_lgvol Pool snap_thnpool", "gluster volume status VOLNAME", "gluster volume start VOLNAME force", "gluster volume status VOLNAME", "gluster volume status VOLNAME", "gluster pool list", "gluster volume rebalance VOLNAME status", "gluster pool list", "gluster pool list", "gluster snapshot delete snapshot1 Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y snapshot delete: failed: Commit failed on 10.00.00.02. Please check log file for details. Snapshot command failed", "systemctl stop glusterd", "service glusterd stop", "rm -rf /var/lib/glusterd/snaps/snapshot1", "systemctl start glusterd", "service glusterd start", "gluster snapshot delete snapshot1", "gluster pool list", "gluster pool list", "gluster volume info VOLNAME", "gluster volume stop VOLNAME", "/var/lib/glusterd/glusterd.info", "gluster volume stop VOLNAME gluster volume start VOLNAME", "systemctl restart glusterd", "service glusterd restart" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/troubleshooting_snapshots
Chapter 6. Network connections
Chapter 6. Network connections 6.1. Creating outgoing connections To connect to a remote server, pass connection options containing the host and port to the container.connect() method. Example: Creating outgoing connections container.on("connection_open", function (event) { console.log("Connection " + event.connection + " is open"); }); var opts = { host: "example.com", port: 5672 }; container.connect(opts); The default host is localhost . The default port is 5672. See the Chapter 7, Security section for information about creating secure connections. 6.2. Configuring reconnect Reconnect allows a client to recover from lost connections. It is used to ensure that the components in a distributed system reestablish communication after temporary network or component failures. AMQ JavaScript enables reconnect by default. If a connection attempt fails, the client will try again after a brief delay. The delay increases exponentially for each new attempt, up to a default maximum of 60 seconds. To disable reconnect, set the reconnect connection option to false . Example: Disabling reconnect var opts = { host: "example.com", reconnect: false }; container.connect(opts); To control the delays between connection attempts, set the initial_reconnect_delay and max_reconnect_delay connection options. Delay options are specified in milliseconds. To limit the number of reconnect attempts, set the reconnect_limit option. Example: Configuring reconnect var opts = { host: "example.com", initial_reconnect_delay: 100 , max_reconnect_delay: 60 * 1000 , reconnect_limit: 10 }; container.connect(opts); 6.3. Configuring failover AMQ JavaScript allows you to configure alternate connection endpoints programmatically. To specify multiple connection endpoints, define a function that returns new connection options and pass the function in the connection_details option. The function is called once for each connection attempt. Example: Configuring failover var hosts = ["alpha.example.com", "beta.example.com"]; var index = -1; function failover_fn() { index += 1; if (index == hosts.length) index = 0; return {host: hosts[index]}; }; var opts = { host: "example.com", connection_details: failover_fn } container.connect(opts); This example implements repeating round-robin failover for a list of hosts. You can use this interface to implement your own failover behavior. 6.4. Accepting incoming connections AMQ JavaScript can accept inbound network connections, enabling you to build custom messaging servers. To start listening for connections, use the container.listen() method with options containing the local host address and port to listen on. Example: Accepting incoming connections container.on("connection_open", function (event) { console.log("New incoming connection " + event.connection ); }); var opts = { host: "0.0.0.0", port: 5672 }; container.listen(opts); The special IP address 0.0.0.0 listens on all available IPv4 interfaces. To listen on all IPv6 interfaces, use [::0] . For more information, see the server receive.js example .
[ "container.on(\"connection_open\", function (event) { console.log(\"Connection \" + event.connection + \" is open\"); }); var opts = { host: \"example.com\", port: 5672 }; container.connect(opts);", "var opts = { host: \"example.com\", reconnect: false }; container.connect(opts);", "var opts = { host: \"example.com\", initial_reconnect_delay: 100 , max_reconnect_delay: 60 * 1000 , reconnect_limit: 10 }; container.connect(opts);", "var hosts = [\"alpha.example.com\", \"beta.example.com\"]; var index = -1; function failover_fn() { index += 1; if (index == hosts.length) index = 0; return {host: hosts[index]}; }; var opts = { host: \"example.com\", connection_details: failover_fn } container.connect(opts);", "container.on(\"connection_open\", function (event) { console.log(\"New incoming connection \" + event.connection ); }); var opts = { host: \"0.0.0.0\", port: 5672 }; container.listen(opts);" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_javascript_client/network_connections
Chapter 1. New and changed features
Chapter 1. New and changed features 1.1. AMQ Python AMQ Python is now supported on Windows with Python 3.8, in addition to Python 3.6. 1.2. AMQ JMS If an AMQ JMS connection is lost, more detailed logging is now produced when the connection is restored. 1.3. AMQ Spring Boot Starter AMQ Spring Boot Starter now supports Spring Boot 2.3. 1.4. AMQ Resource Adapter AMQ Resource Adapter now includes an example using the WildFly application server, replacing the Thorntail example.
null
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/amq_clients_2.8_release_notes/new_and_changed_features
Chapter 1. Logging configuration
Chapter 1. Logging configuration Read about the use of logging API in Quarkus, configuring logging output, and using logging adapters to unify the output from other logging APIs. Quarkus uses the JBoss Log Manager logging backend for publishing application and framework logs. Quarkus supports the JBoss Logging API and multiple other logging APIs, seamlessly integrated with JBoss Log Manager. You can use any of the following APIs : JBoss Logging JDK java.util.logging (JUL) SLF4J Apache Commons Logging Apache Log4j 2 Apache Log4j 1 1.1. Use JBoss Logging for application logging When using the JBoss Logging API, your application requires no additional dependencies, as Quarkus automatically provides it. An example of using the JBoss Logging API to log a message: import org.jboss.logging.Logger; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; @Path("/hello") public class ExampleResource { private static final Logger LOG = Logger.getLogger(ExampleResource.class); @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { LOG.info("Hello"); return "hello"; } } Note While JBoss Logging routes log messages into JBoss Log Manager directly, one of your libraries might rely on a different logging API. In such cases, you need to use a logging adapter to ensure that its log messages are routed to JBoss Log Manager as well. 1.2. Get an application logger In Quarkus, the most common ways to obtain an application logger are by: Declaring a logger field Simplified logging Injecting a configured logger 1.2.1. Declaring a logger field With this classic approach, you use a specific API to obtain a logger instance, store it in a static field of a class, and call logging operations upon this instance. The same flow can be applied with any of the supported logging APIs . An example of storing a logger instance into a static field by using the JBoss Logging API: package com.example; import org.jboss.logging.Logger; public class MyService { private static final Logger log = Logger.getLogger(MyService.class); 1 public void doSomething() { log.info("It works!"); 2 } } 1 Define the logger field. 2 Invoke the desired logging methods on the log object. 1.2.2. Simplified logging Quarkus simplifies logging by automatically adding logger fields to classes that use io.quarkus.logging.Log . This eliminates the need for repetitive boilerplate code and enhances logging setup convenience. An example of simplified logging using static method calls: package com.example; import io.quarkus.logging.Log; 1 class MyService { 2 public void doSomething() { Log.info("Simple!"); 3 } } 1 The io.quarkus.logging.Log class contains the same methods as JBoss Logging, except that they are static . 2 Note that the class does not declare a logger field. This is because during application build, a private static final org.jboss.logging.Logger field is created automatically in each class that uses the Log API. The fully qualified name of the class that calls the Log methods is used as a logger name. In this example, the logger name would be com.example.MyService . 3 Finally, all calls to Log methods are rewritten to regular JBoss Logging calls on the logger field during the application build. Warning Only use the Log API in application classes, not in external dependencies. Log method calls that are not processed by Quarkus at build time will throw an exception. 1.2.3. 
Injecting a configured logger The injection of a configured org.jboss.logging.Logger logger instance with the @Inject annotation is another alternative to adding an application logger, but is applicable only to CDI beans. You can use @Inject Logger log , where the logger gets named after the class you inject it to, or @Inject @LoggerName("... ") Logger log , where the logger will receive the specified name. Once injected, you can use the log object to invoke logging methods. An example of two different types of logger injection: package com.example; import org.jboss.logging.Logger; @ApplicationScoped class SimpleBean { @Inject Logger log; 1 @LoggerName("foo") Logger fooLog; 2 public void ping() { log.info("Simple!"); fooLog.info("Goes to _foo_ logger!"); } } 1 The FQCN of the declaring class is used as a logger name, for example, org.jboss.logging.Logger.getLogger(SimpleBean.class) will be used. 2 In this case, the name foo is used as a logger name, for example, org.jboss.logging.Logger.getLogger("foo") will be used. Note The logger instances are cached internally. Therefore, when a logger is injected, for example, into a @RequestScoped bean, it is shared for all bean instances to avoid possible performance penalties associated with logger instantiation. 1.3. Use log levels Quarkus provides different log levels, which helps developers control the amount of information logged based on the severity of the events. Table 1.1. Log levels used by Quarkus OFF A special level to use in configuration in order to turn off logging. FATAL A critical service failure or complete inability to service requests of any kind. ERROR A significant disruption in a request or the inability to service a request. WARN A non-critical service error or problem that may not require immediate correction. INFO Service lifecycle events or important related very low-frequency information. DEBUG Messages that convey extra information regarding lifecycle or non-request-bound events, useful for debugging. TRACE Messages that convey extra per-request debugging information that may be very high frequency. ALL A special level to use in configuration to turn on logging for all messages, including custom levels. You can also configure the following levels for applications and libraries that use java.util.logging : SEVERE Same as ERROR . WARNING Same as WARN . CONFIG Service configuration information. FINE Same as DEBUG . FINER Same as TRACE . FINEST Increased debug output compared to TRACE , which might have a higher frequency. Table 1.2. The mapping between the levels Numerical level value Standard level name Equivalent java.util.logging (JUL) level name 1100 FATAL Not applicable 1000 ERROR SEVERE 900 WARN WARNING 800 INFO INFO 700 Not applicable CONFIG 500 DEBUG FINE 400 TRACE FINER 300 Not applicable FINEST 1.4. Configure the log level, category, and format JBoss Logging, integrated into Quarkus, offers a unified configuration for all supported logging APIs through a single configuration file that sets up all available extensions. To adjust runtime logging, modify the application.properties file. An example of how you can set the default log level to INFO logging and include Hibernate DEBUG logs: quarkus.log.level=INFO quarkus.log.category."org.hibernate".level=DEBUG When you set the log level to below DEBUG , you must also adjust the minimum log level. 
This setting is either global, using the quarkus.log.min-level configuration property, or per category: quarkus.log.category."org.hibernate".min-level=TRACE This sets a floor level for which Quarkus needs to generate supporting code. The minimum log level must be set at build time so that Quarkus can open the door to optimization opportunities where logging on unusable levels can be elided. An example from native execution: Setting INFO as the minimum logging level sets lower-level checks, such as isTraceEnabled , to false . This identifies code like if(logger.isDebug()) callMethod(); that will never be executed and marks it as "dead." Warning If you add these properties on the command line, ensure the " character is escaped properly. For example, -Dquarkus.log.category.\"org.hibernate\".min-level=TRACE . All potential properties are listed in the logging configuration reference section. 1.4.1. Logging categories Logging is configured on a per-category basis, with each category being configured independently. Configuration for a category applies recursively to all subcategories unless there is a more specific subcategory configuration. The parent of all logging categories is called the "root category." As the ultimate parent, this category might contain a configuration that applies globally to all other categories. This includes the globally configured handlers and formatters. Example 1.1. An example of a global configuration that applies to all categories: quarkus.log.handlers=console,mylog In this example, the root category is configured to use two handlers: console and mylog . Example 1.2. An example of a per-category configuration: quarkus.log.category."org.apache.kafka.clients".level=INFO quarkus.log.category."org.apache.kafka.common.utils".level=INFO This example shows how you can configure the minimal log level on the categories org.apache.kafka.clients and org.apache.kafka.common.utils . For more information, see Logging configuration reference . If you want to configure something extra for a specific category, create a named handler like quarkus.log.handler.[console|file|syslog].<your-handler-name>.* and set it up for that category by using quarkus.log.category.<my-category>.handlers . An example use case can be a desire to use a different timestamp format for log messages which are saved to a file than the format used for other handlers. For further demonstration, see the outputs of the Attaching named handlers to a category example. Property Name Default Description quarkus.log.category."<category-name>".level INFO [a] The level to use to configure the category named <category-name> . The quotes are necessary. quarkus.log.category."<category-name>".min-level DEBUG The minimum logging level to use to configure the category named <category-name> . The quotes are necessary. quarkus.log.category."<category-name>".use-parent-handlers true Specify whether this logger should send its output to its parent logger. quarkus.log.category."<category-name>".handlers=[<handler>] empty [b] The names of the handlers that you want to attach to a specific category. [a] Some extensions may define customized default log levels for certain categories, in order to reduce log noise by default. Setting the log level in configuration will override any extension-defined log levels. [b] By default, the configured category gets the same handlers attached as the one on the root logger. Note The . symbol separates the specific parts in the configuration property. 
The quotes in the property name are used as a required escape to keep category specifications, such as quarkus.log.category."io.quarkus.smallrye.jwt".level=TRACE , intact. 1.4.2. Root logger configuration The root logger category is handled separately, and is configured by using the following properties: Property Name Default Description quarkus.log.level INFO The default log level for every log category. quarkus.log.min-level DEBUG The default minimum log level for every log category. The parent category is examined if no level configuration exists for a given logger category. The root logger configuration is used if no specific configurations are provided for the category and any of its parent categories. Note Although the root logger's handlers are usually configured directly via quarkus.log.console , quarkus.log.file and quarkus.log.syslog , it can nonetheless have additional named handlers attached to it using the quarkus.log.handlers property. 1.5. Logging format Quarkus uses a pattern-based logging formatter that generates human-readable text logs by default, but you can also configure the format for each log handler by using a dedicated property. For the console handler, the property is quarkus.log.console.format . The logging format string supports the following symbols: Symbol Summary Description %% % Renders a simple % character. %c Category Renders the category name. %C Source class Renders the source class name. [a] %d{xxx} Date Renders a date with the given date format string, which uses the syntax defined by java.text.SimpleDateFormat . %e Exception Renders the thrown exception, if any. %F Source file Renders the source file name. [a] %h Host name Renders the system simple host name. %H Qualified host name Renders the system's fully qualified host name, which may be the same as the simple host name, depending on operating system configuration. %i Process ID Render the current process PID. %l Source location Renders the source location information, which includes source file name, line number, class name, and method name. [a] %L Source line Renders the source line number. [a] %m Full Message Renders the log message plus exception (if any). %M Source method Renders the source method name. [a] %n Newline Renders the platform-specific line separator string. %N Process name Render the name of the current process. %p Level Render the log level of the message. %r Relative time Render the time in milliseconds since the start of the application log. %s Simple message Renders just the log message, with no exception trace. %t Thread name Render the thread name. %t{id} Thread ID Render the thread ID. %z{<zone name>} Time zone Set the time zone of the output to <zone name> . %X{<MDC property name>} Mapped Diagnostic Context Value Renders the value from Mapped Diagnostic Context. %X Mapped Diagnostic Context Values Renders all the values from Mapped Diagnostic Context in format {property.key=property.value} . %x Nested Diagnostics context values Renders all the values from Nested Diagnostics Context in format {value1.value2} . [a] Format sequences which examine caller information may affect performance 1.5.1. Alternative console logging formats Changing the console log format is useful, for example, when the console output of the Quarkus application is captured by a service that processes and stores the log information for later analysis. 1.5.1.1. 
JSON logging format The quarkus-logging-json extension may be employed to add support for the JSON logging format and its related configuration. Add this extension to your build file as the following snippet illustrates: Using Maven: <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-logging-json</artifactId> </dependency> Using Gradle: implementation("io.quarkus:quarkus-logging-json") By default, the presence of this extension replaces the output format configuration from the console configuration, and the format string and the color settings (if any) are ignored. The other console configuration items, including those controlling asynchronous logging and the log level, will continue to be applied. For some, it will make sense to use humanly readable (unstructured) logging in dev mode and JSON logging (structured) in production mode. This can be achieved using different profiles, as shown in the following configuration. Disable JSON logging in application.properties for dev and test mode: %dev.quarkus.log.console.json=false %test.quarkus.log.console.json=false 1.5.1.1.1. Configuration Configure the JSON logging extension using supported properties to customize its behavior. Configuration property fixed at build time - All other configuration properties are overridable at runtime Console logging Type Default quarkus.log.console.json Determine whether to enable the JSON console formatting extension, which disables "normal" console formatting. Environment variable: QUARKUS_LOG_CONSOLE_JSON boolean true quarkus.log.console.json.pretty-print Enable "pretty printing" of the JSON record. Note that some JSON parsers will fail to read the pretty printed output. Environment variable: QUARKUS_LOG_CONSOLE_JSON_PRETTY_PRINT boolean false quarkus.log.console.json.date-format The date format to use. The special string "default" indicates that the default format should be used. Environment variable: QUARKUS_LOG_CONSOLE_JSON_DATE_FORMAT string default quarkus.log.console.json.record-delimiter The special end-of-record delimiter to be used. By default, newline is used. Environment variable: QUARKUS_LOG_CONSOLE_JSON_RECORD_DELIMITER string quarkus.log.console.json.zone-id The zone ID to use. The special string "default" indicates that the default zone should be used. Environment variable: QUARKUS_LOG_CONSOLE_JSON_ZONE_ID string default quarkus.log.console.json.exception-output-type The exception output type to specify. Environment variable: QUARKUS_LOG_CONSOLE_JSON_EXCEPTION_OUTPUT_TYPE detailed , formatted , detailed-and-formatted detailed quarkus.log.console.json.print-details Enable printing of more details in the log. Printing the details can be expensive as the values are retrieved from the caller. The details include the source class name, source file name, source method name, and source line number. Environment variable: QUARKUS_LOG_CONSOLE_JSON_PRINT_DETAILS boolean false quarkus.log.console.json.key-overrides Override keys with custom values. Omitting this value indicates that no key overrides will be applied. Environment variable: QUARKUS_LOG_CONSOLE_JSON_KEY_OVERRIDES string quarkus.log.console.json.excluded-keys Keys to be excluded from the JSON output. Environment variable: QUARKUS_LOG_CONSOLE_JSON_EXCLUDED_KEYS list of string quarkus.log.console.json.additional-field."field-name".value Additional field value. 
Environment variable: QUARKUS_LOG_CONSOLE_JSON_ADDITIONAL_FIELD__FIELD_NAME__VALUE string required quarkus.log.console.json.additional-field."field-name".type Additional field type specification. Supported types: string , int , and long . String is the default if not specified. Environment variable: QUARKUS_LOG_CONSOLE_JSON_ADDITIONAL_FIELD__FIELD_NAME__TYPE string , int , long string File logging Type Default quarkus.log.file.json Determine whether to enable the JSON console formatting extension, which disables "normal" console formatting. Environment variable: QUARKUS_LOG_FILE_JSON boolean true quarkus.log.file.json.pretty-print Enable "pretty printing" of the JSON record. Note that some JSON parsers will fail to read the pretty printed output. Environment variable: QUARKUS_LOG_FILE_JSON_PRETTY_PRINT boolean false quarkus.log.file.json.date-format The date format to use. The special string "default" indicates that the default format should be used. Environment variable: QUARKUS_LOG_FILE_JSON_DATE_FORMAT string default quarkus.log.file.json.record-delimiter The special end-of-record delimiter to be used. By default, newline is used. Environment variable: QUARKUS_LOG_FILE_JSON_RECORD_DELIMITER string quarkus.log.file.json.zone-id The zone ID to use. The special string "default" indicates that the default zone should be used. Environment variable: QUARKUS_LOG_FILE_JSON_ZONE_ID string default quarkus.log.file.json.exception-output-type The exception output type to specify. Environment variable: QUARKUS_LOG_FILE_JSON_EXCEPTION_OUTPUT_TYPE detailed , formatted , detailed-and-formatted detailed quarkus.log.file.json.print-details Enable printing of more details in the log. Printing the details can be expensive as the values are retrieved from the caller. The details include the source class name, source file name, source method name, and source line number. Environment variable: QUARKUS_LOG_FILE_JSON_PRINT_DETAILS boolean false quarkus.log.file.json.key-overrides Override keys with custom values. Omitting this value indicates that no key overrides will be applied. Environment variable: QUARKUS_LOG_FILE_JSON_KEY_OVERRIDES string quarkus.log.file.json.excluded-keys Keys to be excluded from the JSON output. Environment variable: QUARKUS_LOG_FILE_JSON_EXCLUDED_KEYS list of string quarkus.log.file.json.additional-field."field-name".value Additional field value. Environment variable: QUARKUS_LOG_FILE_JSON_ADDITIONAL_FIELD__FIELD_NAME__VALUE string required quarkus.log.file.json.additional-field."field-name".type Additional field type specification. Supported types: string , int , and long . String is the default if not specified. Environment variable: QUARKUS_LOG_FILE_JSON_ADDITIONAL_FIELD__FIELD_NAME__TYPE string , int , long string Syslog logging Type Default quarkus.log.syslog.json Determine whether to enable the JSON console formatting extension, which disables "normal" console formatting. Environment variable: QUARKUS_LOG_SYSLOG_JSON boolean true quarkus.log.syslog.json.pretty-print Enable "pretty printing" of the JSON record. Note that some JSON parsers will fail to read the pretty printed output. Environment variable: QUARKUS_LOG_SYSLOG_JSON_PRETTY_PRINT boolean false quarkus.log.syslog.json.date-format The date format to use. The special string "default" indicates that the default format should be used. Environment variable: QUARKUS_LOG_SYSLOG_JSON_DATE_FORMAT string default quarkus.log.syslog.json.record-delimiter The special end-of-record delimiter to be used. By default, newline is used. 
Environment variable: QUARKUS_LOG_SYSLOG_JSON_RECORD_DELIMITER string quarkus.log.syslog.json.zone-id The zone ID to use. The special string "default" indicates that the default zone should be used. Environment variable: QUARKUS_LOG_SYSLOG_JSON_ZONE_ID string default quarkus.log.syslog.json.exception-output-type The exception output type to specify. Environment variable: QUARKUS_LOG_SYSLOG_JSON_EXCEPTION_OUTPUT_TYPE detailed , formatted , detailed-and-formatted detailed quarkus.log.syslog.json.print-details Enable printing of more details in the log. Printing the details can be expensive as the values are retrieved from the caller. The details include the source class name, source file name, source method name, and source line number. Environment variable: QUARKUS_LOG_SYSLOG_JSON_PRINT_DETAILS boolean false quarkus.log.syslog.json.key-overrides Override keys with custom values. Omitting this value indicates that no key overrides will be applied. Environment variable: QUARKUS_LOG_SYSLOG_JSON_KEY_OVERRIDES string quarkus.log.syslog.json.excluded-keys Keys to be excluded from the JSON output. Environment variable: QUARKUS_LOG_SYSLOG_JSON_EXCLUDED_KEYS list of string quarkus.log.syslog.json.additional-field."field-name".value Additional field value. Environment variable: QUARKUS_LOG_SYSLOG_JSON_ADDITIONAL_FIELD__FIELD_NAME__VALUE string required quarkus.log.syslog.json.additional-field."field-name".type Additional field type specification. Supported types: string , int , and long . String is the default if not specified. Environment variable: QUARKUS_LOG_SYSLOG_JSON_ADDITIONAL_FIELD__FIELD_NAME__TYPE string , int , long string Warning Enabling pretty printing might cause certain processors and JSON parsers to fail. Note Printing the details can be expensive as the values are retrieved from the caller. The details include the source class name, source file name, source method name, and source line number. 1.6. Log handlers A log handler is a logging component responsible for the emission of log events to a recipient. Quarkus includes several different log handlers: console , file , and syslog . The featured examples use com.example as a logging category. 1.6.1. Console log handler The console log handler is enabled by default, and it directs all log events to the application's console, usually the system's stdout . A global configuration example: quarkus.log.console.format=%d{yyyy-MM-dd HH:mm:ss} %-5p [%c] (%t) %s%e%n A per-category configuration example: quarkus.log.handler.console.my-console-handler.format=%d{yyyy-MM-dd HH:mm:ss} [com.example] %s%e%n quarkus.log.category."com.example".handlers=my-console-handler quarkus.log.category."com.example".use-parent-handlers=false For details about its configuration, see the console logging configuration reference. 1.6.2. File log handler To log events to a file on the application's host, use the Quarkus file log handler. The file log handler is disabled by default, so you must first enable it. The Quarkus file log handler supports log file rotation. Log file rotation ensures effective log file management over time by maintaining a specified number of backup log files, while keeping the primary log file up-to-date and manageable. 
A global configuration example: quarkus.log.file.enable=true quarkus.log.file.path=application.log quarkus.log.file.format=%d{yyyy-MM-dd HH:mm:ss} %-5p [%c] (%t) %s%e%n A per-category configuration example: quarkus.log.handler.file.my-file-handler.enable=true quarkus.log.handler.file.my-file-handler.path=application.log quarkus.log.handler.file.my-file-handler.format=%d{yyyy-MM-dd HH:mm:ss} [com.example] %s%e%n quarkus.log.category."com.example".handlers=my-file-handler quarkus.log.category."com.example".use-parent-handlers=false For details about its configuration, see the file logging configuration reference. 1.6.3. Syslog log handler The syslog handler in Quarkus follows the Syslog protocol, which is used to send log messages on UNIX-like systems. It utilizes the protocol defined in RFC 5424 . By default, the syslog handler is disabled. When enabled, it sends all log events to a syslog server, typically the local syslog server for the application. A global configuration example: quarkus.log.syslog.enable=true quarkus.log.syslog.app-name=my-application quarkus.log.syslog.format=%d{yyyy-MM-dd HH:mm:ss} %-5p [%c] (%t) %s%e%n A per-category configuration example: quarkus.log.handler.syslog.my-syslog-handler.enable=true quarkus.log.handler.syslog.my-syslog-handler.app-name=my-application quarkus.log.handler.syslog.my-syslog-handler.format=%d{yyyy-MM-dd HH:mm:ss} [com.example] %s%e%n quarkus.log.category."com.example".handlers=my-syslog-handler quarkus.log.category."com.example".use-parent-handlers=false For details about its configuration, see the Syslog logging configuration reference. 1.7. Add a logging filter to your log handler Log handlers, such as the console log handler, can be linked with a filter that determines whether a log record should be logged. To register a logging filter: Annotate a final class that implements java.util.logging.Filter with @io.quarkus.logging.LoggingFilter , and set the name property: An example of writing a filter: package com.example; import io.quarkus.logging.LoggingFilter; import java.util.logging.Filter; import java.util.logging.LogRecord; @LoggingFilter(name = "my-filter") public final class TestFilter implements Filter { private final String part; public TestFilter(@ConfigProperty(name = "my-filter.part") String part) { this.part = part; } @Override public boolean isLoggable(LogRecord record) { return !record.getMessage().contains(part); } } In this example, we exclude log records containing specific text from console logs. The specific text to filter on is not hard-coded; instead, it is read from the my-filter.part configuration property. An example of Configuring the filter in application.properties : my-filter.part=TEST Attach the filter to the corresponding handler using the filter configuration property, located in application.properties : quarkus.log.console.filter=my-filter 1.8. Examples of logging configurations The following examples show some of the ways in which you can configure logging in Quarkus: Console DEBUG logging except for Quarkus logs (INFO), no color, shortened time, shortened category prefixes quarkus.log.console.format=%d{HH:mm:ss} %-5p [%c{2.}] (%t) %s%e%n quarkus.log.console.level=DEBUG quarkus.console.color=false quarkus.log.category."io.quarkus".level=INFO Note If you add these properties in the command line, ensure " is escaped. For example, -Dquarkus.log.category.\"io.quarkus\".level=DEBUG . 
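The same runtime settings can also be passed when launching the packaged application rather than being placed in application.properties. The following sketch assumes the default fast-jar layout under target/quarkus-app/ and uses only property and environment variable names documented in this chapter; the category environment variable name is derived from the QUARKUS_LOG_CATEGORY__...__LEVEL pattern shown in the reference tables:

java -Dquarkus.log.level=INFO \
     -Dquarkus.log.category.\"org.hibernate\".level=DEBUG \
     -jar target/quarkus-app/quarkus-run.jar

QUARKUS_LOG_LEVEL=INFO \
QUARKUS_LOG_CATEGORY__ORG_HIBERNATE__LEVEL=DEBUG \
java -jar target/quarkus-app/quarkus-run.jar

The environment variable form avoids the quote escaping that quoted category names require on the command line.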
File TRACE logging configuration quarkus.log.file.enable=true # Send output to a trace.log file under the /tmp directory quarkus.log.file.path=/tmp/trace.log quarkus.log.file.level=TRACE quarkus.log.file.format=%d{HH:mm:ss} %-5p [%c{2.}] (%t) %s%e%n # Set 2 categories (io.quarkus.smallrye.jwt, io.undertow.request.security) to TRACE level quarkus.log.min-level=TRACE quarkus.log.category."io.quarkus.smallrye.jwt".level=TRACE quarkus.log.category."io.undertow.request.security".level=TRACE Note As we don't change the root logger, the console log will only contain INFO or higher level logs. Named handlers attached to a category # Send output to a trace.log file under the /tmp directory quarkus.log.file.path=/tmp/trace.log quarkus.log.console.format=%d{HH:mm:ss} %-5p [%c{2.}] (%t) %s%e%n # Configure a named handler that logs to console quarkus.log.handler.console."STRUCTURED_LOGGING".format=%e%n # Configure a named handler that logs to file quarkus.log.handler.file."STRUCTURED_LOGGING_FILE".enable=true quarkus.log.handler.file."STRUCTURED_LOGGING_FILE".format=%e%n # Configure the category and link the two named handlers to it quarkus.log.category."io.quarkus.category".level=INFO quarkus.log.category."io.quarkus.category".handlers=STRUCTURED_LOGGING,STRUCTURED_LOGGING_FILE Named handlers attached to the root logger # configure a named file handler that sends the output to 'quarkus.log' quarkus.log.handler.file.CONSOLE_MIRROR.enable=true quarkus.log.handler.file.CONSOLE_MIRROR.path=quarkus.log # attach the handler to the root logger quarkus.log.handlers=CONSOLE_MIRROR 1.9. Centralized log management Use a centralized location to efficiently collect, store, and analyze log data from various components and instances of the application. To send logs to a centralized tool such as Graylog, Logstash, or Fluentd, see the Quarkus Centralized log management guide. 1.10. Configure logging for @QuarkusTest Enable proper logging for @QuarkusTest by setting the java.util.logging.manager system property to org.jboss.logmanager.LogManager . The system property must be set early on to be effective, so it is recommended to configure it in the build system. Setting the java.util.logging.manager system property in the Maven Surefire plugin configuration <build> <plugins> <plugin> <artifactId>maven-surefire-plugin</artifactId> <version>USD{surefire-plugin.version}</version> <configuration> <systemPropertyVariables> <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager> 1 <quarkus.log.level>DEBUG</quarkus.log.level> 2 <maven.home>USD{maven.home}</maven.home> </systemPropertyVariables> </configuration> </plugin> </plugins> </build> 1 Make sure the org.jboss.logmanager.LogManager is used. 2 Enable debug logging for all logging categories. For Gradle, add the following configuration to the build.gradle file: test { systemProperty "java.util.logging.manager", "org.jboss.logmanager.LogManager" } See also Running @QuarkusTest from an IDE . 1.11. Use other logging APIs Quarkus relies on the JBoss Logging library for all the logging requirements. Suppose you use libraries that depend on other logging libraries, such as Apache Commons Logging, Log4j, or SLF4J. In that case, exclude them from the dependencies and use one of the JBoss Logging adapters. 
This is especially important when building native executables, as you could encounter issues similar to the following when compiling the native executable: The logging implementation is not included in the native executable, but you can resolve this issue using JBoss Logging adapters. These adapters are available for popular open-source logging components, as explained in the chapter. 1.11.1. Add a logging adapter to your application For each logging API that is not jboss-logging : Add a logging adapter library to ensure that messages logged through these APIs are routed to the JBoss Log Manager backend. Note This step is unnecessary for libraries that are dependencies of a Quarkus extension where the extension handles it automatically. Apache Commons Logging: Using Maven: <dependency> <groupId>org.jboss.logging</groupId> <artifactId>commons-logging-jboss-logging</artifactId> </dependency> Using Gradle: implementation("org.jboss.logging:commons-logging-jboss-logging") Log4j: Using Maven: <dependency> <groupId>org.jboss.logmanager</groupId> <artifactId>log4j-jboss-logmanager</artifactId> </dependency> Using Gradle: implementation("org.jboss.logmanager:log4j-jboss-logmanager") Log4j 2: Using Maven: <dependency> <groupId>org.jboss.logmanager</groupId> <artifactId>log4j2-jboss-logmanager</artifactId> </dependency> Using Gradle: implementation("org.jboss.logmanager:log4j2-jboss-logmanager") Note Do not include any Log4j dependencies because the log4j2-jboss-logmanager library contains all that is needed to use Log4j as a logging implementation. SLF4J: Using Maven: <dependency> <groupId>org.jboss.slf4j</groupId> <artifactId>slf4j-jboss-logmanager</artifactId> </dependency> Using Gradle: implementation("org.jboss.slf4j:slf4j-jboss-logmanager") Verify whether the logs generated by the added library adhere to the same format as the other Quarkus logs. 1.11.2. Use MDC to add contextual log information Quarkus overrides the logging Mapped Diagnostic Context (MDC) to improve the compatibility with its reactive core. 1.11.2.1. Add and read MDC data To add data to the MDC and extract it in your log output: Use the MDC class to set the data. Customize the log format to use %X{mdc-key} . Let's consider the following code: Example with JBoss Logging and io.quarkus.logging.Log package me.sample; import io.quarkus.logging.Log; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import org.jboss.logmanager.MDC; import java.util.UUID; @Path("/hello/jboss") public class GreetingResourceJbossLogging { @GET @Path("/test") public String greeting() { MDC.put("request.id", UUID.randomUUID().toString()); MDC.put("request.path", "/hello/test"); Log.info("request received"); return "hello world!"; } } If you configure the log format with the following line: quarkus.log.console.format=%d{HH:mm:ss} %-5p request.id=%X{request.id} request.path=%X{request.path} [%c{2.}] (%t) %s%n You get messages containing the MDC data: 08:48:13 INFO request.id=c37a3a36-b7f6-4492-83a1-de41dbc26fe2 request.path=/hello/test [me.sa.GreetingResourceJbossLogging] (executor-thread-1) request received 1.11.2.2. MDC and supported logging APIs Depending on the API you use, the MDC class is slightly different. However, the APIs are very similar: Log4j 1 - org.apache.log4j.MDC.put(key, value) Log4j 2 - org.apache.logging.log4j.ThreadContext.put(key, value) SLF4J - org.slf4j.MDC.put(key, value) 1.11.2.3. 
MDC propagation In Quarkus, the MDC provider has a specific implementation for handling the reactive context, ensuring that MDC data is propagated during reactive and asynchronous processing. As a result, you can still access the MDC data in various scenarios: After asynchronous calls, for example, when a REST client returns a Uni. In code submitted to org.eclipse.microprofile.context.ManagedExecutor . In code executed with vertx.executeBlocking() . Note If applicable, MDC data is stored in a duplicated context , which is an isolated context for processing a single task (request). 1.12. Logging configuration reference Configuration property fixed at build time - All other configuration properties are overridable at runtime Configuration property Type Default quarkus.log.metrics.enabled If enabled and a metrics extension is present, logging metrics are published. Environment variable: QUARKUS_LOG_METRICS_ENABLED boolean false quarkus.log.min-level The default minimum log level. Environment variable: QUARKUS_LOG_MIN_LEVEL Level DEBUG Minimum logging categories Type Default quarkus.log.category."categories".min-level The minimum log level for this category. By default, all categories are configured with DEBUG minimum level. To get runtime logging below DEBUG , e.g., TRACE , adjust the minimum level at build time. The right log level needs to be provided at runtime. As an example, to get TRACE logging, minimum level needs to be at TRACE , and the runtime log level needs to match that. Environment variable: QUARKUS_LOG_CATEGORY__CATEGORIES__MIN_LEVEL InheritableLevel inherit Configuration property fixed at build time - All other configuration properties are overridable at runtime Configuration property Type Default quarkus.log.level The log level of the root category, which is used as the default log level for all categories. JBoss Logging supports Apache-style log levels: {@link org.jboss.logmanager.Level#FATAL} {@link org.jboss.logmanager.Level#ERROR} {@link org.jboss.logmanager.Level#WARN} {@link org.jboss.logmanager.Level#INFO} {@link org.jboss.logmanager.Level#DEBUG} {@link org.jboss.logmanager.Level#TRACE} In addition, it also supports the standard JDK log levels. Environment variable: QUARKUS_LOG_LEVEL Level INFO quarkus.log.handlers The names of additional handlers to link to the root category. These handlers are defined in consoleHandlers, fileHandlers, or syslogHandlers. Environment variable: QUARKUS_LOG_HANDLERS list of string Console logging Type Default quarkus.log.console.enable If console logging should be enabled Environment variable: QUARKUS_LOG_CONSOLE_ENABLE boolean true quarkus.log.console.stderr If console logging should go to System#err instead of System#out . Environment variable: QUARKUS_LOG_CONSOLE_STDERR boolean false quarkus.log.console.format The log format. Note that this value is ignored if an extension is present that takes control of console formatting (e.g., an XML or JSON-format extension). Environment variable: QUARKUS_LOG_CONSOLE_FORMAT string %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] (%t) %s%e%n quarkus.log.console.level The console log level. Environment variable: QUARKUS_LOG_CONSOLE_LEVEL Level ALL quarkus.log.console.darken Specify how much the colors should be darkened. Note that this value is ignored if an extension is present that takes control of console formatting (e.g., an XML or JSON-format extension). Environment variable: QUARKUS_LOG_CONSOLE_DARKEN int 0 quarkus.log.console.filter The name of the filter to link to the console handler. 
Environment variable: QUARKUS_LOG_CONSOLE_FILTER string quarkus.log.console.async Indicates whether to log asynchronously Environment variable: QUARKUS_LOG_CONSOLE_ASYNC boolean false quarkus.log.console.async.queue-length The queue length to use before flushing writing Environment variable: QUARKUS_LOG_CONSOLE_ASYNC_QUEUE_LENGTH int 512 quarkus.log.console.async.overflow Determine whether to block the publisher (rather than drop the message) when the queue is full Environment variable: QUARKUS_LOG_CONSOLE_ASYNC_OVERFLOW block , discard block File logging Type Default quarkus.log.file.enable If file logging should be enabled Environment variable: QUARKUS_LOG_FILE_ENABLE boolean false quarkus.log.file.format The log format Environment variable: QUARKUS_LOG_FILE_FORMAT string %d{yyyy-MM-dd HH:mm:ss,SSS} %h %N[%i] %-5p [%c{3.}] (%t) %s%e%n quarkus.log.file.level The level of logs to be written into the file. Environment variable: QUARKUS_LOG_FILE_LEVEL Level ALL quarkus.log.file.path The name of the file in which logs will be written. Environment variable: QUARKUS_LOG_FILE_PATH File quarkus.log quarkus.log.file.filter The name of the filter to link to the file handler. Environment variable: QUARKUS_LOG_FILE_FILTER string quarkus.log.file.encoding The character encoding used Environment variable: QUARKUS_LOG_FILE_ENCODING Charset quarkus.log.file.async Indicates whether to log asynchronously Environment variable: QUARKUS_LOG_FILE_ASYNC boolean false quarkus.log.file.async.queue-length The queue length to use before flushing writing Environment variable: QUARKUS_LOG_FILE_ASYNC_QUEUE_LENGTH int 512 quarkus.log.file.async.overflow Determine whether to block the publisher (rather than drop the message) when the queue is full Environment variable: QUARKUS_LOG_FILE_ASYNC_OVERFLOW block , discard block quarkus.log.file.rotation.max-file-size The maximum log file size, after which a rotation is executed. Environment variable: QUARKUS_LOG_FILE_ROTATION_MAX_FILE_SIZE MemorySize 10M quarkus.log.file.rotation.max-backup-index The maximum number of backups to keep. Environment variable: QUARKUS_LOG_FILE_ROTATION_MAX_BACKUP_INDEX int 5 quarkus.log.file.rotation.file-suffix The file handler rotation file suffix. When used, the file will be rotated based on its suffix. Example fileSuffix: .yyyy-MM-dd Note: If the suffix ends with .zip or .gz, the rotation file will also be compressed. Environment variable: QUARKUS_LOG_FILE_ROTATION_FILE_SUFFIX string quarkus.log.file.rotation.rotate-on-boot Indicates whether to rotate log files on server initialization. You need to either set a max-file-size or configure a file-suffix for it to work. 
Environment variable: QUARKUS_LOG_FILE_ROTATION_ROTATE_ON_BOOT boolean true Syslog logging Type Default quarkus.log.syslog.enable If syslog logging should be enabled Environment variable: QUARKUS_LOG_SYSLOG_ENABLE boolean false quarkus.log.syslog.endpoint The IP address and port of the Syslog server Environment variable: QUARKUS_LOG_SYSLOG_ENDPOINT host:port localhost:514 quarkus.log.syslog.app-name The app name used when formatting the message in RFC5424 format Environment variable: QUARKUS_LOG_SYSLOG_APP_NAME string quarkus.log.syslog.hostname The name of the host the messages are being sent from Environment variable: QUARKUS_LOG_SYSLOG_HOSTNAME string quarkus.log.syslog.facility Sets the facility used when calculating the priority of the message as defined by RFC-5424 and RFC-3164 Environment variable: QUARKUS_LOG_SYSLOG_FACILITY kernel , user-level , mail-system , system-daemons , security , syslogd , line-printer , network-news , uucp , clock-daemon , security2 , ftp-daemon , ntp , log-audit , log-alert , clock-daemon2 , local-use-0 , local-use-1 , local-use-2 , local-use-3 , local-use-4 , local-use-5 , local-use-6 , local-use-7 user-level quarkus.log.syslog.syslog-type Set the SyslogType syslog type this handler should use to format the message sent Environment variable: QUARKUS_LOG_SYSLOG_SYSLOG_TYPE rfc5424 , rfc3164 rfc5424 quarkus.log.syslog.protocol Sets the protocol used to connect to the Syslog server Environment variable: QUARKUS_LOG_SYSLOG_PROTOCOL tcp , udp , ssl-tcp tcp quarkus.log.syslog.use-counting-framing If enabled, the message being sent is prefixed with the size of the message Environment variable: QUARKUS_LOG_SYSLOG_USE_COUNTING_FRAMING boolean false quarkus.log.syslog.truncate Set to true to truncate the message if it exceeds maximum length Environment variable: QUARKUS_LOG_SYSLOG_TRUNCATE boolean true quarkus.log.syslog.block-on-reconnect Enables or disables blocking when attempting to reconnect a org.jboss.logmanager.handlers.SyslogHandler.Protocol#TCP TCP or org.jboss.logmanager.handlers.SyslogHandler.Protocol#SSL_TCP SSL TCP protocol Environment variable: QUARKUS_LOG_SYSLOG_BLOCK_ON_RECONNECT boolean false quarkus.log.syslog.format The log message format Environment variable: QUARKUS_LOG_SYSLOG_FORMAT string %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] (%t) %s%e%n quarkus.log.syslog.level The log level specifying what message levels will be logged by the Syslog logger Environment variable: QUARKUS_LOG_SYSLOG_LEVEL Level ALL quarkus.log.syslog.filter The name of the filter to link to the file handler. Environment variable: QUARKUS_LOG_SYSLOG_FILTER string quarkus.log.syslog.max-length The maximum length, in bytes, of the message allowed to be sent. The length includes the header and the message. If not set, the default value is 2048 when sys-log-type is rfc5424 (which is the default) and 1024 when sys-log-type is rfc3164 Environment variable: QUARKUS_LOG_SYSLOG_MAX_LENGTH MemorySize quarkus.log.syslog.async Indicates whether to log asynchronously Environment variable: QUARKUS_LOG_SYSLOG_ASYNC boolean false quarkus.log.syslog.async.queue-length The queue length to use before flushing writing Environment variable: QUARKUS_LOG_SYSLOG_ASYNC_QUEUE_LENGTH int 512 quarkus.log.syslog.async.overflow Determine whether to block the publisher (rather than drop the message) when the queue is full Environment variable: QUARKUS_LOG_SYSLOG_ASYNC_OVERFLOW block , discard block Logging categories Type Default quarkus.log.category."categories".level The log level for this category. 
Note that to get log levels below INFO , the minimum level build-time configuration option also needs to be adjusted. Environment variable: QUARKUS_LOG_CATEGORY__CATEGORIES__LEVEL InheritableLevel inherit quarkus.log.category."categories".handlers The names of the handlers to link to this category. Environment variable: QUARKUS_LOG_CATEGORY__CATEGORIES__HANDLERS list of string quarkus.log.category."categories".use-parent-handlers Specify whether this logger should send its output to its parent Logger Environment variable: QUARKUS_LOG_CATEGORY__CATEGORIES__USE_PARENT_HANDLERS boolean true Console handlers Type Default quarkus.log.handler.console."console-handlers".enable If console logging should be enabled Environment variable: QUARKUS_LOG_HANDLER_CONSOLE__CONSOLE_HANDLERS__ENABLE boolean true quarkus.log.handler.console."console-handlers".stderr If console logging should go to System#err instead of System#out . Environment variable: QUARKUS_LOG_HANDLER_CONSOLE__CONSOLE_HANDLERS__STDERR boolean false quarkus.log.handler.console."console-handlers".format The log format. Note that this value is ignored if an extension is present that takes control of console formatting (e.g., an XML or JSON-format extension). Environment variable: QUARKUS_LOG_HANDLER_CONSOLE__CONSOLE_HANDLERS__FORMAT string %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] (%t) %s%e%n quarkus.log.handler.console."console-handlers".level The console log level. Environment variable: QUARKUS_LOG_HANDLER_CONSOLE__CONSOLE_HANDLERS__LEVEL Level ALL quarkus.log.handler.console."console-handlers".darken Specify how much the colors should be darkened. Note that this value is ignored if an extension is present that takes control of console formatting (e.g., an XML or JSON-format extension). Environment variable: QUARKUS_LOG_HANDLER_CONSOLE__CONSOLE_HANDLERS__DARKEN int 0 quarkus.log.handler.console."console-handlers".filter The name of the filter to link to the console handler. Environment variable: QUARKUS_LOG_HANDLER_CONSOLE__CONSOLE_HANDLERS__FILTER string quarkus.log.handler.console."console-handlers".async Indicates whether to log asynchronously Environment variable: QUARKUS_LOG_HANDLER_CONSOLE__CONSOLE_HANDLERS__ASYNC boolean false quarkus.log.handler.console."console-handlers".async.queue-length The queue length to use before flushing writing Environment variable: QUARKUS_LOG_HANDLER_CONSOLE__CONSOLE_HANDLERS__ASYNC_QUEUE_LENGTH int 512 quarkus.log.handler.console."console-handlers".async.overflow Determine whether to block the publisher (rather than drop the message) when the queue is full Environment variable: QUARKUS_LOG_HANDLER_CONSOLE__CONSOLE_HANDLERS__ASYNC_OVERFLOW block , discard block File handlers Type Default quarkus.log.handler.file."file-handlers".enable If file logging should be enabled Environment variable: QUARKUS_LOG_HANDLER_FILE__FILE_HANDLERS__ENABLE boolean false quarkus.log.handler.file."file-handlers".format The log format Environment variable: QUARKUS_LOG_HANDLER_FILE__FILE_HANDLERS__FORMAT string %d{yyyy-MM-dd HH:mm:ss,SSS} %h %N[%i] %-5p [%c{3.}] (%t) %s%e%n quarkus.log.handler.file."file-handlers".level The level of logs to be written into the file. Environment variable: QUARKUS_LOG_HANDLER_FILE__FILE_HANDLERS__LEVEL Level ALL quarkus.log.handler.file."file-handlers".path The name of the file in which logs will be written. Environment variable: QUARKUS_LOG_HANDLER_FILE__FILE_HANDLERS__PATH File quarkus.log quarkus.log.handler.file."file-handlers".filter The name of the filter to link to the file handler. 
Environment variable: QUARKUS_LOG_HANDLER_FILE__FILE_HANDLERS__FILTER string quarkus.log.handler.file."file-handlers".encoding The character encoding used Environment variable: QUARKUS_LOG_HANDLER_FILE__FILE_HANDLERS__ENCODING Charset quarkus.log.handler.file."file-handlers".async Indicates whether to log asynchronously Environment variable: QUARKUS_LOG_HANDLER_FILE__FILE_HANDLERS__ASYNC boolean false quarkus.log.handler.file."file-handlers".async.queue-length The queue length to use before flushing writing Environment variable: QUARKUS_LOG_HANDLER_FILE__FILE_HANDLERS__ASYNC_QUEUE_LENGTH int 512 quarkus.log.handler.file."file-handlers".async.overflow Determine whether to block the publisher (rather than drop the message) when the queue is full Environment variable: QUARKUS_LOG_HANDLER_FILE__FILE_HANDLERS__ASYNC_OVERFLOW block , discard block quarkus.log.handler.file."file-handlers".rotation.max-file-size The maximum log file size, after which a rotation is executed. Environment variable: QUARKUS_LOG_HANDLER_FILE__FILE_HANDLERS__ROTATION_MAX_FILE_SIZE MemorySize 10M quarkus.log.handler.file."file-handlers".rotation.max-backup-index The maximum number of backups to keep. Environment variable: QUARKUS_LOG_HANDLER_FILE__FILE_HANDLERS__ROTATION_MAX_BACKUP_INDEX int 5 quarkus.log.handler.file."file-handlers".rotation.file-suffix The file handler rotation file suffix. When used, the file will be rotated based on its suffix. Example fileSuffix: .yyyy-MM-dd Note: If the suffix ends with .zip or .gz, the rotation file will also be compressed. Environment variable: QUARKUS_LOG_HANDLER_FILE__FILE_HANDLERS__ROTATION_FILE_SUFFIX string quarkus.log.handler.file."file-handlers".rotation.rotate-on-boot Indicates whether to rotate log files on server initialization. You need to either set a max-file-size or configure a file-suffix for it to work. 
Environment variable: QUARKUS_LOG_HANDLER_FILE__FILE_HANDLERS__ROTATION_ROTATE_ON_BOOT boolean true Syslog handlers Type Default quarkus.log.handler.syslog."syslog-handlers".enable If syslog logging should be enabled Environment variable: QUARKUS_LOG_HANDLER_SYSLOG__SYSLOG_HANDLERS__ENABLE boolean false quarkus.log.handler.syslog."syslog-handlers".endpoint The IP address and port of the Syslog server Environment variable: QUARKUS_LOG_HANDLER_SYSLOG__SYSLOG_HANDLERS__ENDPOINT host:port localhost:514 quarkus.log.handler.syslog."syslog-handlers".app-name The app name used when formatting the message in RFC5424 format Environment variable: QUARKUS_LOG_HANDLER_SYSLOG__SYSLOG_HANDLERS__APP_NAME string quarkus.log.handler.syslog."syslog-handlers".hostname The name of the host the messages are being sent from Environment variable: QUARKUS_LOG_HANDLER_SYSLOG__SYSLOG_HANDLERS__HOSTNAME string quarkus.log.handler.syslog."syslog-handlers".facility Sets the facility used when calculating the priority of the message as defined by RFC-5424 and RFC-3164 Environment variable: QUARKUS_LOG_HANDLER_SYSLOG__SYSLOG_HANDLERS__FACILITY kernel , user-level , mail-system , system-daemons , security , syslogd , line-printer , network-news , uucp , clock-daemon , security2 , ftp-daemon , ntp , log-audit , log-alert , clock-daemon2 , local-use-0 , local-use-1 , local-use-2 , local-use-3 , local-use-4 , local-use-5 , local-use-6 , local-use-7 user-level quarkus.log.handler.syslog."syslog-handlers".syslog-type Set the SyslogType syslog type this handler should use to format the message sent Environment variable: QUARKUS_LOG_HANDLER_SYSLOG__SYSLOG_HANDLERS__SYSLOG_TYPE rfc5424 , rfc3164 rfc5424 quarkus.log.handler.syslog."syslog-handlers".protocol Sets the protocol used to connect to the Syslog server Environment variable: QUARKUS_LOG_HANDLER_SYSLOG__SYSLOG_HANDLERS__PROTOCOL tcp , udp , ssl-tcp tcp quarkus.log.handler.syslog."syslog-handlers".use-counting-framing If enabled, the message being sent is prefixed with the size of the message Environment variable: QUARKUS_LOG_HANDLER_SYSLOG__SYSLOG_HANDLERS__USE_COUNTING_FRAMING boolean false quarkus.log.handler.syslog."syslog-handlers".truncate Set to true to truncate the message if it exceeds maximum length Environment variable: QUARKUS_LOG_HANDLER_SYSLOG__SYSLOG_HANDLERS__TRUNCATE boolean true quarkus.log.handler.syslog."syslog-handlers".block-on-reconnect Enables or disables blocking when attempting to reconnect a org.jboss.logmanager.handlers.SyslogHandler.Protocol#TCP TCP or org.jboss.logmanager.handlers.SyslogHandler.Protocol#SSL_TCP SSL TCP protocol Environment variable: QUARKUS_LOG_HANDLER_SYSLOG__SYSLOG_HANDLERS__BLOCK_ON_RECONNECT boolean false quarkus.log.handler.syslog."syslog-handlers".format The log message format Environment variable: QUARKUS_LOG_HANDLER_SYSLOG__SYSLOG_HANDLERS__FORMAT string %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] (%t) %s%e%n quarkus.log.handler.syslog."syslog-handlers".level The log level specifying what message levels will be logged by the Syslog logger Environment variable: QUARKUS_LOG_HANDLER_SYSLOG__SYSLOG_HANDLERS__LEVEL Level ALL quarkus.log.handler.syslog."syslog-handlers".filter The name of the filter to link to the file handler. Environment variable: QUARKUS_LOG_HANDLER_SYSLOG__SYSLOG_HANDLERS__FILTER string quarkus.log.handler.syslog."syslog-handlers".max-length The maximum length, in bytes, of the message allowed to be sent. The length includes the header and the message. 
If not set, the default value is 2048 when sys-log-type is rfc5424 (which is the default) and 1024 when sys-log-type is rfc3164 Environment variable: QUARKUS_LOG_HANDLER_SYSLOG__SYSLOG_HANDLERS__MAX_LENGTH MemorySize quarkus.log.handler.syslog."syslog-handlers".async Indicates whether to log asynchronously Environment variable: QUARKUS_LOG_HANDLER_SYSLOG__SYSLOG_HANDLERS__ASYNC boolean false quarkus.log.handler.syslog."syslog-handlers".async.queue-length The queue length to use before flushing writing Environment variable: QUARKUS_LOG_HANDLER_SYSLOG__SYSLOG_HANDLERS__ASYNC_QUEUE_LENGTH int 512 quarkus.log.handler.syslog."syslog-handlers".async.overflow Determine whether to block the publisher (rather than drop the message) when the queue is full Environment variable: QUARKUS_LOG_HANDLER_SYSLOG__SYSLOG_HANDLERS__ASYNC_OVERFLOW block , discard block Log cleanup filters - internal use Type Default quarkus.log.filter."filters".if-starts-with The message prefix to match Environment variable: QUARKUS_LOG_FILTER__FILTERS__IF_STARTS_WITH list of string inherit quarkus.log.filter."filters".target-level The new log level for the filtered message. Defaults to DEBUG. Environment variable: QUARKUS_LOG_FILTER__FILTERS__TARGET_LEVEL Level DEBUG About the MemorySize format A size configuration option recognizes strings in this format (shown as a regular expression): [0-9]+[KkMmGgTtPpEeZzYy]? . If no suffix is given, bytes are assumed.
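As a worked example, the following application.properties sketch combines several of the options documented above into one named file handler. It is a minimal sketch only: the handler name MY_TRACE_LOG, the path, and the concrete values are illustrative choices, not defaults.
quarkus.log.handler.file.MY_TRACE_LOG.enable=true
quarkus.log.handler.file.MY_TRACE_LOG.path=/tmp/trace.log
quarkus.log.handler.file.MY_TRACE_LOG.rotation.max-file-size=50M
quarkus.log.handler.file.MY_TRACE_LOG.rotation.max-backup-index=10
quarkus.log.handler.file.MY_TRACE_LOG.async=true
quarkus.log.handler.file.MY_TRACE_LOG.async.queue-length=1024
quarkus.log.handlers=MY_TRACE_LOG
The 50M value uses the MemorySize format described above; a bare number such as 52428800 would be read as bytes.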
[ "import org.jboss.logging.Logger; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; @Path(\"/hello\") public class ExampleResource { private static final Logger LOG = Logger.getLogger(ExampleResource.class); @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { LOG.info(\"Hello\"); return \"hello\"; } }", "package com.example; import org.jboss.logging.Logger; public class MyService { private static final Logger log = Logger.getLogger(MyService.class); 1 public void doSomething() { log.info(\"It works!\"); 2 } }", "package com.example; import io.quarkus.logging.Log; 1 class MyService { 2 public void doSomething() { Log.info(\"Simple!\"); 3 } }", "package com.example; import org.jboss.logging.Logger; @ApplicationScoped class SimpleBean { @Inject Logger log; 1 @LoggerName(\"foo\") Logger fooLog; 2 public void ping() { log.info(\"Simple!\"); fooLog.info(\"Goes to _foo_ logger!\"); } }", "quarkus.log.level=INFO quarkus.log.category.\"org.hibernate\".level=DEBUG", "quarkus.log.category.\"org.hibernate\".min-level=TRACE", "-Dquarkus.log.category.\\\"org.hibernate\\\".level=TRACE", "quarkus.log.handlers=console,mylog", "quarkus.log.category.\"org.apache.kafka.clients\".level=INFO quarkus.log.category.\"org.apache.kafka.common.utils\".level=INFO", "<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-logging-json</artifactId> </dependency>", "implementation(\"io.quarkus:quarkus-logging-json\")", "%dev.quarkus.log.console.json=false %test.quarkus.log.console.json=false", "quarkus.log.console.format=%d{yyyy-MM-dd HH:mm:ss} %-5p [%c] (%t) %s%e%n", "quarkus.log.handler.console.my-console-handler.format=%d{yyyy-MM-dd HH:mm:ss} [com.example] %s%e%n quarkus.log.category.\"com.example\".handlers=my-console-handler quarkus.log.category.\"com.example\".use-parent-handlers=false", "quarkus.log.file.enable=true quarkus.log.file.path=application.log quarkus.log.file.format=%d{yyyy-MM-dd HH:mm:ss} %-5p [%c] (%t) %s%e%n", "quarkus.log.handler.file.my-file-handler.enable=true quarkus.log.handler.file.my-file-handler.path=application.log quarkus.log.handler.file.my-file-handler.format=%d{yyyy-MM-dd HH:mm:ss} [com.example] %s%e%n quarkus.log.category.\"com.example\".handlers=my-file-handler quarkus.log.category.\"com.example\".use-parent-handlers=false", "quarkus.log.syslog.enable=true quarkus.log.syslog.app-name=my-application quarkus.log.syslog.format=%d{yyyy-MM-dd HH:mm:ss} %-5p [%c] (%t) %s%e%n", "quarkus.log.handler.syslog.my-syslog-handler.enable=true quarkus.log.handler.syslog.my-syslog-handler.app-name=my-application quarkus.log.handler.syslog.my-syslog-handler.format=%d{yyyy-MM-dd HH:mm:ss} [com.example] %s%e%n quarkus.log.category.\"com.example\".handlers=my-syslog-handler quarkus.log.category.\"com.example\".use-parent-handlers=false", "package com.example; import io.quarkus.logging.LoggingFilter; import java.util.logging.Filter; import java.util.logging.LogRecord; @LoggingFilter(name = \"my-filter\") public final class TestFilter implements Filter { private final String part; public TestFilter(@ConfigProperty(name = \"my-filter.part\") String part) { this.part = part; } @Override public boolean isLoggable(LogRecord record) { return !record.getMessage().contains(part); } }", "my-filter.part=TEST", "quarkus.log.console.filter=my-filter", "quarkus.log.console.format=%d{HH:mm:ss} %-5p [%c{2.}] (%t) %s%e%n quarkus.log.console.level=DEBUG quarkus.console.color=false quarkus.log.category.\"io.quarkus\".level=INFO", 
"quarkus.log.file.enable=true Send output to a trace.log file under the /tmp directory quarkus.log.file.path=/tmp/trace.log quarkus.log.file.level=TRACE quarkus.log.file.format=%d{HH:mm:ss} %-5p [%c{2.}] (%t) %s%e%n Set 2 categories (io.quarkus.smallrye.jwt, io.undertow.request.security) to TRACE level quarkus.log.min-level=TRACE quarkus.log.category.\"io.quarkus.smallrye.jwt\".level=TRACE quarkus.log.category.\"io.undertow.request.security\".level=TRACE", "Send output to a trace.log file under the /tmp directory quarkus.log.file.path=/tmp/trace.log quarkus.log.console.format=%d{HH:mm:ss} %-5p [%c{2.}] (%t) %s%e%n Configure a named handler that logs to console quarkus.log.handler.console.\"STRUCTURED_LOGGING\".format=%e%n Configure a named handler that logs to file quarkus.log.handler.file.\"STRUCTURED_LOGGING_FILE\".enable=true quarkus.log.handler.file.\"STRUCTURED_LOGGING_FILE\".format=%e%n Configure the category and link the two named handlers to it quarkus.log.category.\"io.quarkus.category\".level=INFO quarkus.log.category.\"io.quarkus.category\".handlers=STRUCTURED_LOGGING,STRUCTURED_LOGGING_FILE", "configure a named file handler that sends the output to 'quarkus.log' quarkus.log.handler.file.CONSOLE_MIRROR.enable=true quarkus.log.handler.file.CONSOLE_MIRROR.path=quarkus.log attach the handler to the root logger quarkus.log.handlers=CONSOLE_MIRROR", "<build> <plugins> <plugin> <artifactId>maven-surefire-plugin</artifactId> <version>USD{surefire-plugin.version}</version> <configuration> <systemPropertyVariables> <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager> 1 <quarkus.log.level>DEBUG</quarkus.log.level> 2 <maven.home>USD{maven.home}</maven.home> </systemPropertyVariables> </configuration> </plugin> </plugins> </build>", "test { systemProperty \"java.util.logging.manager\", \"org.jboss.logmanager.LogManager\" }", "Caused by java.lang.ClassNotFoundException: org.apache.commons.logging.impl.LogFactoryImpl", "<dependency> <groupId>org.jboss.logging</groupId> <artifactId>commons-logging-jboss-logging</artifactId> </dependency>", "implementation(\"org.jboss.logging:commons-logging-jboss-logging\")", "<dependency> <groupId>org.jboss.logmanager</groupId> <artifactId>log4j-jboss-logmanager</artifactId> </dependency>", "implementation(\"org.jboss.logmanager:log4j-jboss-logmanager\")", "<dependency> <groupId>org.jboss.logmanager</groupId> <artifactId>log4j2-jboss-logmanager</artifactId> </dependency>", "implementation(\"org.jboss.logmanager:log4j2-jboss-logmanager\")", "<dependency> <groupId>org.jboss.slf4j</groupId> <artifactId>slf4j-jboss-logmanager</artifactId> </dependency>", "implementation(\"org.jboss.slf4j:slf4j-jboss-logmanager\")", "package me.sample; import io.quarkus.logging.Log; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import org.jboss.logmanager.MDC; import java.util.UUID; @Path(\"/hello/jboss\") public class GreetingResourceJbossLogging { @GET @Path(\"/test\") public String greeting() { MDC.put(\"request.id\", UUID.randomUUID().toString()); MDC.put(\"request.path\", \"/hello/test\"); Log.info(\"request received\"); return \"hello world!\"; } }", "quarkus.log.console.format=%d{HH:mm:ss} %-5p request.id=%X{request.id} request.path=%X{request.path} [%c{2.}] (%t) %s%n", "08:48:13 INFO request.id=c37a3a36-b7f6-4492-83a1-de41dbc26fe2 request.path=/hello/test [me.sa.GreetingResourceJbossLogging] (executor-thread-1) request received" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html/logging_configuration/logging
Chapter 3. CustomResourceDefinition [apiextensions.k8s.io/v1]
Chapter 3. CustomResourceDefinition [apiextensions.k8s.io/v1] Description CustomResourceDefinition represents a resource that should be exposed on the API server. Its name MUST be in the format <.spec.name>.<.spec.group>. Type object Required spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object CustomResourceDefinitionSpec describes how a user wants their resource to appear status object CustomResourceDefinitionStatus indicates the state of the CustomResourceDefinition 3.1.1. .spec Description CustomResourceDefinitionSpec describes how a user wants their resource to appear Type object Required group names scope versions Property Type Description conversion object CustomResourceConversion describes how to convert different versions of a CR. group string group is the API group of the defined custom resource. The custom resources are served under /apis/<group>/... . Must match the name of the CustomResourceDefinition (in the form <names.plural>.<group> ). names object CustomResourceDefinitionNames indicates the names to serve this CustomResourceDefinition preserveUnknownFields boolean preserveUnknownFields indicates that object fields which are not specified in the OpenAPI schema should be preserved when persisting to storage. apiVersion, kind, metadata and known fields inside metadata are always preserved. This field is deprecated in favor of setting x-preserve-unknown-fields to true in spec.versions[*].schema.openAPIV3Schema . See https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#field-pruning for details. scope string scope indicates whether the defined custom resource is cluster- or namespace-scoped. Allowed values are Cluster and Namespaced . versions array versions is the list of all API versions of the defined custom resource. Version names are used to compute the order in which served versions are listed in API discovery. If the version string is "kube-like", it will sort above non "kube-like" version strings, which are ordered lexicographically. "Kube-like" versions start with a "v", then are followed by a number (the major version), then optionally the string "alpha" or "beta" and another number (the minor version). These are sorted first by GA > beta > alpha (where GA is a version with no suffix such as beta or alpha), and then by comparing major version, then minor version. An example sorted list of versions: v10, v2, v1, v11beta2, v10beta3, v3beta1, v12alpha1, v11alpha2, foo1, foo10. versions[] object CustomResourceDefinitionVersion describes a version for CRD. 3.1.2. .spec.conversion Description CustomResourceConversion describes how to convert different versions of a CR. 
Type object Required strategy Property Type Description strategy string strategy specifies how custom resources are converted between versions. Allowed values are: - None : The converter only changes the apiVersion and does not touch any other field in the custom resource. - Webhook : The API server will call an external webhook to do the conversion. Additional information is needed for this option. This requires spec.preserveUnknownFields to be false, and spec.conversion.webhook to be set. webhook object WebhookConversion describes how to call a conversion webhook 3.1.3. .spec.conversion.webhook Description WebhookConversion describes how to call a conversion webhook Type object Required conversionReviewVersions Property Type Description clientConfig object WebhookClientConfig contains the information to make a TLS connection with the webhook. conversionReviewVersions array (string) conversionReviewVersions is an ordered list of preferred ConversionReview versions the Webhook expects. The API server will use the first version in the list which it supports. If none of the versions specified in this list are supported by the API server, conversion will fail for the custom resource. If a persisted Webhook configuration specifies allowed versions and does not include any versions known to the API Server, calls to the webhook will fail. 3.1.4. .spec.conversion.webhook.clientConfig Description WebhookClientConfig contains the information to make a TLS connection with the webhook. Type object Property Type Description caBundle string caBundle is a PEM encoded CA bundle which will be used to validate the webhook's server certificate. If unspecified, system trust roots on the apiserver are used. service object ServiceReference holds a reference to Service.legacy.k8s.io url string url gives the location of the webhook, in standard URL form ( scheme://host:port/path ). Exactly one of url or service must be specified. The host should not refer to a service running in the cluster; use the service field instead. The host might be resolved via external DNS in some apiservers (e.g., kube-apiserver cannot resolve in-cluster DNS as that would be a layering violation). host may also be an IP address. Please note that using localhost or 127.0.0.1 as a host is risky unless you take great care to run this webhook on all hosts which run an apiserver which might need to make calls to this webhook. Such installs are likely to be non-portable, i.e., not easy to turn up in a new cluster. The scheme must be "https"; the URL must begin with "https://". A path is optional, and if present may be any string permissible in a URL. You may use the path to pass an arbitrary string to the webhook, for example, a cluster identifier. Attempting to use a user or basic auth e.g. "user:password@" is not allowed. Fragments ("#... ") and query parameters ("?... ") are not allowed, either. 3.1.5. .spec.conversion.webhook.clientConfig.service Description ServiceReference holds a reference to Service.legacy.k8s.io Type object Required namespace name Property Type Description name string name is the name of the service. Required namespace string namespace is the namespace of the service. Required path string path is an optional URL path at which the webhook will be contacted. port integer port is an optional service port at which the webhook will be contacted. port should be a valid port number (1-65535, inclusive). Defaults to 443 for backward compatibility. 3.1.6.
.spec.names Description CustomResourceDefinitionNames indicates the names to serve this CustomResourceDefinition Type object Required plural kind Property Type Description categories array (string) categories is a list of grouped resources this custom resource belongs to (e.g. 'all'). This is published in API discovery documents, and used by clients to support invocations like kubectl get all . kind string kind is the serialized kind of the resource. It is normally CamelCase and singular. Custom resource instances will use this value as the kind attribute in API calls. listKind string listKind is the serialized kind of the list for this resource. Defaults to "`kind`List". plural string plural is the plural name of the resource to serve. The custom resources are served under /apis/<group>/<version>/... /<plural> . Must match the name of the CustomResourceDefinition (in the form <names.plural>.<group> ). Must be all lowercase. shortNames array (string) shortNames are short names for the resource, exposed in API discovery documents, and used by clients to support invocations like kubectl get <shortname> . It must be all lowercase. singular string singular is the singular name of the resource. It must be all lowercase. Defaults to lowercased kind . 3.1.7. .spec.versions Description versions is the list of all API versions of the defined custom resource. Version names are used to compute the order in which served versions are listed in API discovery. If the version string is "kube-like", it will sort above non "kube-like" version strings, which are ordered lexicographically. "Kube-like" versions start with a "v", then are followed by a number (the major version), then optionally the string "alpha" or "beta" and another number (the minor version). These are sorted first by GA > beta > alpha (where GA is a version with no suffix such as beta or alpha), and then by comparing major version, then minor version. An example sorted list of versions: v10, v2, v1, v11beta2, v10beta3, v3beta1, v12alpha1, v11alpha2, foo1, foo10. Type array 3.1.8. .spec.versions[] Description CustomResourceDefinitionVersion describes a version for CRD. Type object Required name served storage Property Type Description additionalPrinterColumns array additionalPrinterColumns specifies additional columns returned in Table output. See https://kubernetes.io/docs/reference/using-api/api-concepts/#receiving-resources-as-tables for details. If no columns are specified, a single column displaying the age of the custom resource is used. additionalPrinterColumns[] object CustomResourceColumnDefinition specifies a column for server side printing. deprecated boolean deprecated indicates this version of the custom resource API is deprecated. When set to true, API requests to this version receive a warning header in the server response. Defaults to false. deprecationWarning string deprecationWarning overrides the default warning returned to API clients. May only be set when deprecated is true. The default warning indicates this version is deprecated and recommends use of the newest served version of equal or greater stability, if one exists. name string name is the version name, e.g. "v1", "v2beta1", etc. The custom resources are served under this version at /apis/<group>/<version>/... if served is true. schema object CustomResourceValidation is a list of validation methods for CustomResources. 
served boolean served is a flag enabling/disabling this version from being served via REST APIs storage boolean storage indicates this version should be used when persisting custom resources to storage. There must be exactly one version with storage=true. subresources object CustomResourceSubresources defines the status and scale subresources for CustomResources. 3.1.9. .spec.versions[].additionalPrinterColumns Description additionalPrinterColumns specifies additional columns returned in Table output. See https://kubernetes.io/docs/reference/using-api/api-concepts/#receiving-resources-as-tables for details. If no columns are specified, a single column displaying the age of the custom resource is used. Type array 3.1.10. .spec.versions[].additionalPrinterColumns[] Description CustomResourceColumnDefinition specifies a column for server side printing. Type object Required name type jsonPath Property Type Description description string description is a human readable description of this column. format string format is an optional OpenAPI type definition for this column. The 'name' format is applied to the primary identifier column to assist in clients identifying column is the resource name. See https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md#data-types for details. jsonPath string jsonPath is a simple JSON path (i.e. with array notation) which is evaluated against each custom resource to produce the value for this column. name string name is a human readable name for the column. priority integer priority is an integer defining the relative importance of this column compared to others. Lower numbers are considered higher priority. Columns that may be omitted in limited space scenarios should be given a priority greater than 0. type string type is an OpenAPI type definition for this column. See https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md#data-types for details. 3.1.11. .spec.versions[].schema Description CustomResourceValidation is a list of validation methods for CustomResources. Type object Property Type Description openAPIV3Schema `` openAPIV3Schema is the OpenAPI v3 schema to use for validation and pruning. 3.1.12. .spec.versions[].subresources Description CustomResourceSubresources defines the status and scale subresources for CustomResources. Type object Property Type Description scale object CustomResourceSubresourceScale defines how to serve the scale subresource for CustomResources. status object CustomResourceSubresourceStatus defines how to serve the status subresource for CustomResources. Status is represented by the .status JSON path inside of a CustomResource. When set, * exposes a /status subresource for the custom resource * PUT requests to the /status subresource take a custom resource object, and ignore changes to anything except the status stanza * PUT/POST/PATCH requests to the custom resource ignore changes to the status stanza 3.1.13. .spec.versions[].subresources.scale Description CustomResourceSubresourceScale defines how to serve the scale subresource for CustomResources. Type object Required specReplicasPath statusReplicasPath Property Type Description labelSelectorPath string labelSelectorPath defines the JSON path inside of a custom resource that corresponds to Scale status.selector . Only JSON paths without the array notation are allowed. Must be a JSON Path under .status or .spec . Must be set to work with HorizontalPodAutoscaler. 
The field pointed by this JSON path must be a string field (not a complex selector struct) which contains a serialized label selector in string form. More info: https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions#scale-subresource If there is no value under the given path in the custom resource, the status.selector value in the /scale subresource will default to the empty string. specReplicasPath string specReplicasPath defines the JSON path inside of a custom resource that corresponds to Scale spec.replicas . Only JSON paths without the array notation are allowed. Must be a JSON Path under .spec . If there is no value under the given path in the custom resource, the /scale subresource will return an error on GET. statusReplicasPath string statusReplicasPath defines the JSON path inside of a custom resource that corresponds to Scale status.replicas . Only JSON paths without the array notation are allowed. Must be a JSON Path under .status . If there is no value under the given path in the custom resource, the status.replicas value in the /scale subresource will default to 0. 3.1.14. .spec.versions[].subresources.status Description CustomResourceSubresourceStatus defines how to serve the status subresource for CustomResources. Status is represented by the .status JSON path inside of a CustomResource. When set, * exposes a /status subresource for the custom resource * PUT requests to the /status subresource take a custom resource object, and ignore changes to anything except the status stanza * PUT/POST/PATCH requests to the custom resource ignore changes to the status stanza Type object 3.1.15. .status Description CustomResourceDefinitionStatus indicates the state of the CustomResourceDefinition Type object Property Type Description acceptedNames object CustomResourceDefinitionNames indicates the names to serve this CustomResourceDefinition conditions array conditions indicate state for particular aspects of a CustomResourceDefinition conditions[] object CustomResourceDefinitionCondition contains details for the current condition of this pod. storedVersions array (string) storedVersions lists all versions of CustomResources that were ever persisted. Tracking these versions allows a migration path for stored versions in etcd. The field is mutable so a migration controller can finish a migration to another version (ensuring no old objects are left in storage), and then remove the rest of the versions from this list. Versions may not be removed from spec.versions while they exist in this list. 3.1.16. .status.acceptedNames Description CustomResourceDefinitionNames indicates the names to serve this CustomResourceDefinition Type object Required plural kind Property Type Description categories array (string) categories is a list of grouped resources this custom resource belongs to (e.g. 'all'). This is published in API discovery documents, and used by clients to support invocations like kubectl get all . kind string kind is the serialized kind of the resource. It is normally CamelCase and singular. Custom resource instances will use this value as the kind attribute in API calls. listKind string listKind is the serialized kind of the list for this resource. Defaults to "`kind`List". plural string plural is the plural name of the resource to serve. The custom resources are served under /apis/<group>/<version>/... /<plural> . Must match the name of the CustomResourceDefinition (in the form <names.plural>.<group> ). Must be all lowercase. 
shortNames array (string) shortNames are short names for the resource, exposed in API discovery documents, and used by clients to support invocations like kubectl get <shortname> . It must be all lowercase. singular string singular is the singular name of the resource. It must be all lowercase. Defaults to lowercased kind . 3.1.17. .status.conditions Description conditions indicate state for particular aspects of a CustomResourceDefinition Type array 3.1.18. .status.conditions[] Description CustomResourceDefinitionCondition contains details for the current condition of this pod. Type object Required type status Property Type Description lastTransitionTime Time lastTransitionTime last time the condition transitioned from one status to another. message string message is a human-readable message indicating details about last transition. reason string reason is a unique, one-word, CamelCase reason for the condition's last transition. status string status is the status of the condition. Can be True, False, Unknown. type string type is the type of the condition. Types include Established, NamesAccepted and Terminating. 3.2. API endpoints The following API endpoints are available: /apis/apiextensions.k8s.io/v1/customresourcedefinitions DELETE : delete collection of CustomResourceDefinition GET : list or watch objects of kind CustomResourceDefinition POST : create a CustomResourceDefinition /apis/apiextensions.k8s.io/v1/watch/customresourcedefinitions GET : watch individual changes to a list of CustomResourceDefinition. deprecated: use the 'watch' parameter with a list operation instead. /apis/apiextensions.k8s.io/v1/customresourcedefinitions/{name} DELETE : delete a CustomResourceDefinition GET : read the specified CustomResourceDefinition PATCH : partially update the specified CustomResourceDefinition PUT : replace the specified CustomResourceDefinition /apis/apiextensions.k8s.io/v1/watch/customresourcedefinitions/{name} GET : watch changes to an object of kind CustomResourceDefinition. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/apiextensions.k8s.io/v1/customresourcedefinitions/{name}/status GET : read status of the specified CustomResourceDefinition PATCH : partially update status of the specified CustomResourceDefinition PUT : replace status of the specified CustomResourceDefinition 3.2.1. /apis/apiextensions.k8s.io/v1/customresourcedefinitions Table 3.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of CustomResourceDefinition Table 3.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 3.3. Body parameters Parameter Type Description body DeleteOptions schema Table 3.4. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind CustomResourceDefinition Table 3.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results.
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.6. HTTP responses HTTP code Response body 200 - OK CustomResourceDefinitionList schema 401 - Unauthorized Empty HTTP method POST Description create a CustomResourceDefinition Table 3.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered.
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.8. Body parameters Parameter Type Description body CustomResourceDefinition schema Table 3.9. HTTP responses HTTP code Response body 200 - OK CustomResourceDefinition schema 201 - Created CustomResourceDefinition schema 202 - Accepted CustomResourceDefinition schema 401 - Unauthorized Empty 3.2.2. /apis/apiextensions.k8s.io/v1/watch/customresourcedefinitions Table 3.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of CustomResourceDefinition. deprecated: use the 'watch' parameter with a list operation instead. Table 3.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.3. /apis/apiextensions.k8s.io/v1/customresourcedefinitions/{name} Table 3.12. Global path parameters Parameter Type Description name string name of the CustomResourceDefinition Table 3.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a CustomResourceDefinition Table 3.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 3.15. Body parameters Parameter Type Description body DeleteOptions schema Table 3.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified CustomResourceDefinition Table 3.17. HTTP responses HTTP code Reponse body 200 - OK CustomResourceDefinition schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified CustomResourceDefinition Table 3.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 3.19. Body parameters Parameter Type Description body Patch schema Table 3.20. HTTP responses HTTP code Reponse body 200 - OK CustomResourceDefinition schema 201 - Created CustomResourceDefinition schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified CustomResourceDefinition Table 3.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.22. Body parameters Parameter Type Description body CustomResourceDefinition schema Table 3.23. HTTP responses HTTP code Reponse body 200 - OK CustomResourceDefinition schema 201 - Created CustomResourceDefinition schema 401 - Unauthorized Empty 3.2.4. /apis/apiextensions.k8s.io/v1/watch/customresourcedefinitions/{name} Table 3.24. Global path parameters Parameter Type Description name string name of the CustomResourceDefinition Table 3.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. 
Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind CustomResourceDefinition. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 3.26. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.5. /apis/apiextensions.k8s.io/v1/customresourcedefinitions/{name}/status Table 3.27. Global path parameters Parameter Type Description name string name of the CustomResourceDefinition Table 3.28. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified CustomResourceDefinition Table 3.29. HTTP responses HTTP code Reponse body 200 - OK CustomResourceDefinition schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified CustomResourceDefinition Table 3.30. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 3.31. Body parameters Parameter Type Description body Patch schema Table 3.32. HTTP responses HTTP code Reponse body 200 - OK CustomResourceDefinition schema 201 - Created CustomResourceDefinition schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified CustomResourceDefinition Table 3.33. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.34. Body parameters Parameter Type Description body CustomResourceDefinition schema Table 3.35. HTTP responses HTTP code Response body 200 - OK CustomResourceDefinition schema 201 - Created CustomResourceDefinition schema 401 - Unauthorized Empty
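The request patterns described in these tables can be exercised directly from the command line. The following is a minimal sketch using raw API access and server-side dry run; the CRD name crontabs.stable.example.com, the file crontab-crd.yaml, the label, and the chunk size are hypothetical placeholders rather than values taken from this reference.
# List CustomResourceDefinitions in chunks of 5; when more results remain,
# the response carries a metadata.continue token.
oc get --raw '/apis/apiextensions.k8s.io/v1/customresourcedefinitions?limit=5'
# Pass the returned token back to fetch the next chunk with otherwise identical parameters.
oc get --raw '/apis/apiextensions.k8s.io/v1/customresourcedefinitions?limit=5&continue=<token>'
# Validate a create request on the server without persisting it (equivalent to dryRun=All).
oc apply --dry-run=server -f crontab-crd.yaml
# Partially update a single CustomResourceDefinition with a merge patch, for example to add a label.
oc patch crd crontabs.stable.example.com --type merge -p '{"metadata":{"labels":{"team":"platform"}}}'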
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/extension_apis/customresourcedefinition-apiextensions-k8s-io-v1
Chapter 39. host
Chapter 39. host This chapter describes the commands under the host command. 39.1. host list List hosts Usage: Table 39.1. Optional Arguments Value Summary -h, --help Show this help message and exit --zone <zone> Only return hosts in the availability zone Table 39.2. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 39.3. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 39.4. JSON Formatter Value Summary --noindent Whether to disable indenting the JSON Table 39.5. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 39.2. host set Set host properties Usage: Table 39.6. Positional Arguments Value Summary <host> Host to modify (name only) Table 39.7. Optional Arguments Value Summary -h, --help Show this help message and exit --enable Enable the host --disable Disable the host --enable-maintenance Enable maintenance mode for the host --disable-maintenance Disable maintenance mode for the host 39.3. host show Display host details Usage: Table 39.8. Positional Arguments Value Summary <host> Name of host Table 39.9. Optional Arguments Value Summary -h, --help Show this help message and exit Table 39.10. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 39.11. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 39.12. JSON Formatter Value Summary --noindent Whether to disable indenting the JSON Table 39.13. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
[ "openstack host list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--zone <zone>]", "openstack host set [-h] [--enable | --disable] [--enable-maintenance | --disable-maintenance] <host>", "openstack host show [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] <host>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/host
Chapter 3. Installing the Migration Toolkit for Containers
Chapter 3. Installing the Migration Toolkit for Containers You can install the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 4. Note To install MTC on OpenShift Container Platform 3, see Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 3 . By default, the MTC web console and the Migration Controller pod run on the target cluster. You can configure the Migration Controller custom resource manifest to run the MTC web console and the Migration Controller pod on a remote cluster . After you have installed MTC, you must configure an object storage to use as a replication repository. To uninstall MTC, see Uninstalling MTC and deleting resources . 3.1. Compatibility guidelines You must install the Migration Toolkit for Containers (MTC) Operator that is compatible with your OpenShift Container Platform version. Definitions legacy platform OpenShift Container Platform 4.5 and earlier. modern platform OpenShift Container Platform 4.6 and later. legacy operator The MTC Operator designed for legacy platforms. modern operator The MTC Operator designed for modern platforms. control cluster The cluster that runs the MTC controller and GUI. remote cluster A source or destination cluster for a migration that runs Velero. The Control Cluster communicates with Remote clusters via the Velero API to drive migrations. You must use the compatible MTC version for migrating your OpenShift Container Platform clusters. For the migration to succeed both your source cluster and the destination cluster must use the same version of MTC. MTC 1.7 supports migrations from OpenShift Container Platform 3.11 to 4.9. MTC 1.8 only supports migrations from OpenShift Container Platform 4.10 and later. Table 3.1. MTC compatibility: Migrating from a legacy or a modern platform Details OpenShift Container Platform 3.11 OpenShift Container Platform 4.0 to 4.5 OpenShift Container Platform 4.6 to 4.9 OpenShift Container Platform 4.10 or later Stable MTC version MTC v.1.7. z MTC v.1.7. z MTC v.1.7. z MTC v.1.8. z Installation Legacy MTC v.1.7. z operator: Install manually with the operator.yml file. [ IMPORTANT ] This cluster cannot be the control cluster. Install with OLM, release channel release-v1.7 Install with OLM, release channel release-v1.8 Edge cases exist in which network restrictions prevent modern clusters from connecting to other clusters involved in the migration. For example, when migrating from an OpenShift Container Platform 3.11 cluster on premises to a modern OpenShift Container Platform cluster in the cloud, where the modern cluster cannot connect to the OpenShift Container Platform 3.11 cluster. With MTC v.1.7. z , if one of the remote clusters is unable to communicate with the control cluster because of network restrictions, use the crane tunnel-api command. With the stable MTC release, although you should always designate the most modern cluster as the control cluster, in this specific case it is possible to designate the legacy cluster as the control cluster and push workloads to the remote cluster. 3.2. Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 4.2 to 4.5 You can install the legacy Migration Toolkit for Containers Operator manually on OpenShift Container Platform versions 4.2 to 4.5. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must have access to registry.redhat.io . You must have podman installed. 
Procedure Log in to registry.redhat.io with your Red Hat Customer Portal credentials: USD podman login registry.redhat.io Download the operator.yml file by entering the following command: podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./ Download the controller.yml file by entering the following command: podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./ Log in to your OpenShift Container Platform source cluster. Verify that the cluster can authenticate with registry.redhat.io : USD oc run test --image registry.redhat.io/ubi9 --command sleep infinity Create the Migration Toolkit for Containers Operator object: USD oc create -f operator.yml Example output namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-builders" already exists 1 Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-pullers" already exists 1 You can ignore Error from server (AlreadyExists) messages. They are caused by the Migration Toolkit for Containers Operator creating resources for earlier versions of OpenShift Container Platform 4 that are provided in later releases. Create the MigrationController object: USD oc create -f controller.yml Verify that the MTC pods are running: USD oc get pods -n openshift-migration 3.3. Installing the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.15 You install the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.15 by using the Operator Lifecycle Manager. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Use the Filter by keyword field to find the Migration Toolkit for Containers Operator . Select the Migration Toolkit for Containers Operator and click Install . Click Install . On the Installed Operators page, the Migration Toolkit for Containers Operator appears in the openshift-migration project with the status Succeeded . Click Migration Toolkit for Containers Operator . Under Provided APIs , locate the Migration Controller tile, and click Create Instance . Click Create . Click Workloads Pods to verify that the MTC pods are running. 3.4. Proxy configuration For OpenShift Container Platform 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object. For OpenShift Container Platform 4.2 to 4.15, the MTC inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings. 3.4.1. Direct volume migration Direct Volume Migration (DVM) was introduced in MTC 1.4.2. DVM supports only one proxy. 
The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy. If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards the SSL connections transparently without decrypting and re-encrypting them with their own SSL certificates. A Stunnel proxy is an example of such a proxy. 3.4.1.1. TCP proxy setup for DVM You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC. 3.4.1.2. Why use a TCP proxy instead of an HTTP/HTTPS proxy? You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel. Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy. 3.4.1.3. Known issue Migration fails with error Upgrade request required The migration Controller uses the SPDY protocol to execute commands within remote pods. If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands. The migration fails with the error message Upgrade request required . Workaround: Use a proxy that supports the SPDY protocol. In addition to supporting the SPDY protocol, the proxy or firewall also must pass the Upgrade HTTP header to the API server. The client uses this header to open a websocket connection with the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required . Workaround: Ensure that the proxy forwards the Upgrade header. 3.4.2. Tuning network policies for migrations OpenShift supports restricting traffic to or from pods using NetworkPolicy or EgressFirewalls based on the network plugin used by the cluster. If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently stop traffic to Rsync pods during migration. Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions. 3.4.2.1. NetworkPolicy configuration 3.4.2.1.1. 
Egress traffic from Rsync pods You can use the unique labels of Rsync pods to allow egress traffic to pass from them if the NetworkPolicy configuration in the source or destination namespaces blocks this type of traffic. The following policy allows all egress traffic from Rsync pods in the namespace: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress 3.4.2.1.2. Ingress traffic to Rsync pods apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress 3.4.2.2. EgressNetworkPolicy configuration The EgressNetworkPolicy object or Egress Firewalls are OpenShift constructs designed to block egress traffic leaving the cluster. Unlike the NetworkPolicy object, the Egress Firewall works at a project level because it applies to all pods in the namespace. Therefore, the unique labels of Rsync pods do not exempt only Rsync pods from the restrictions. However, you can add the CIDR ranges of the source or target cluster to the Allow rule of the policy so that a direct connection can be setup between two clusters. Based on which cluster the Egress Firewall is present in, you can add the CIDR range of the other cluster to allow egress traffic between the two: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny 3.4.2.3. Choosing alternate endpoints for data transfer By default, DVM uses an OpenShift Container Platform route as an endpoint to transfer PV data to destination clusters. You can choose another type of supported endpoint, if cluster topologies allow. For each cluster, you can configure an endpoint by setting the rsync_endpoint_type variable on the appropriate destination cluster in your MigrationController CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route] 3.4.2.4. Configuring supplemental groups for Rsync pods When your PVCs use a shared storage, you can configure the access to that storage by adding supplemental groups to Rsync pod definitions in order for the pods to allow access: Table 3.2. Supplementary groups for Rsync pods Variable Type Default Description src_supplemental_groups string Not set Comma-separated list of supplemental groups for source Rsync pods target_supplemental_groups string Not set Comma-separated list of supplemental groups for target Rsync pods Example usage The MigrationController CR can be updated to set values for these supplemental groups: spec: src_supplemental_groups: "1000,2000" target_supplemental_groups: "2000,3000" 3.4.3. Configuring proxies Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure Get the MigrationController CR manifest: USD oc get migrationcontroller <migration_controller> -n openshift-migration Update the proxy parameters: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration ... 
spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2 1 Stunnel proxy URL for direct volume migration. 2 Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set. Save the manifest as migration-controller.yaml . Apply the updated manifest: USD oc replace -f migration-controller.yaml -n openshift-migration For more information, see Configuring the cluster-wide proxy . 3.4.4. Running Rsync as either root or non-root OpenShift Container Platform environments have the PodSecurityAdmission controller enabled by default. This controller requires cluster administrators to enforce Pod Security Standards by means of namespace labels. All workloads in the cluster are expected to run one of the following Pod Security Standard levels: Privileged , Baseline or Restricted . Every cluster has its own default policy set. To guarantee successful data transfer in all environments, Migration Toolkit for Containers (MTC) 1.7.5 introduced changes in Rsync pods, including running Rsync pods as non-root user by default. This ensures that data transfer is possible even for workloads that do not necessarily require higher privileges. This change was made because it is best to run workloads with the lowest level of privileges possible. 3.4.4.1. Manually overriding default non-root operation for data transfer Although running Rsync pods as non-root user works in most cases, data transfer might fail when you run workloads as root user on the source side. MTC provides two ways to manually override default non-root operation for data transfer: Configure all migrations to run an Rsync pod as root on the destination cluster for all migrations. Run an Rsync pod as root on the destination cluster per migration. In both cases, you must set the following labels on the source side of any namespaces that are running workloads with higher privileges before migration: enforce , audit , and warn. To learn more about Pod Security Admission and setting values for labels, see Controlling pod security admission synchronization . 3.4.4.2. Configuring the MigrationController CR as root or non-root for all migrations By default, Rsync runs as non-root. On the destination cluster, you can configure the MigrationController CR to run Rsync as root. Procedure Configure the MigrationController CR as follows: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] migration_rsync_privileged: true This configuration will apply to all future migrations. 3.4.4.3. 
Configuring the MigMigration CR as root or non-root per migration On the destination cluster, you can configure the MigMigration CR to run Rsync as root or non-root, with the following non-root options: As a specific user ID (UID) As a specific group ID (GID) Procedure To run Rsync as root, configure the MigMigration CR according to this example: apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsRoot: true To run Rsync as a specific User ID (UID) or as a specific Group ID (GID), configure the MigMigration CR according to this example: apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsUser: 10010001 runAsGroup: 3 3.5. Configuring a replication repository You must configure an object storage to use as a replication repository. The Migration Toolkit for Containers (MTC) copies data from the source cluster to the replication repository, and then from the replication repository to the target cluster. MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. Select a method that is suited for your environment and is supported by your storage provider. MTC supports the following storage providers: Multicloud Object Gateway Amazon Web Services S3 Google Cloud Platform Microsoft Azure Blob Generic S3 object storage, for example, Minio or Ceph S3 3.5.1. Prerequisites All clusters must have uninterrupted network access to the replication repository. If you use a proxy server with an internally hosted replication repository, you must ensure that the proxy allows access to the replication repository. 3.5.2. Retrieving Multicloud Object Gateway credentials You must retrieve the Multicloud Object Gateway (MCG) credentials and S3 endpoint, which you need to configure MCG as a replication repository for the Migration Toolkit for Containers (MTC). You must retrieve the Multicloud Object Gateway (MCG) credentials, which you need to create a Secret custom resource (CR) for MTC. Note Although the MCG Operator is deprecated , the MCG plugin is still available for OpenShift Data Foundation. To download the plugin, browse to Download Red Hat OpenShift Data Foundation and download the appropriate MCG plugin for your operating system. Prerequisites You must deploy OpenShift Data Foundation by using the appropriate Red Hat OpenShift Data Foundation deployment guide . Procedure Obtain the S3 endpoint, AWS_ACCESS_KEY_ID , and AWS_SECRET_ACCESS_KEY by running the describe command on the NooBaa custom resource. You use these credentials to add MCG as a replication repository. 3.5.3. Configuring Amazon Web Services You configure Amazon Web Services (AWS) S3 object storage as a replication repository for the Migration Toolkit for Containers (MTC) . Prerequisites You must have the AWS CLI installed. The AWS S3 storage bucket must be accessible to the source and target clusters. If you are using the snapshot copy method: You must have access to EC2 Elastic Block Storage (EBS). The source and target clusters must be in the same region. The source and target clusters must have the same storage class. The storage class must be compatible with snapshots. 
Procedure Set the BUCKET variable: USD BUCKET=<your_bucket> Set the REGION variable: USD REGION=<your_region> Create an AWS S3 bucket: USD aws s3api create-bucket \ --bucket USDBUCKET \ --region USDREGION \ --create-bucket-configuration LocationConstraint=USDREGION 1 1 us-east-1 does not support a LocationConstraint . If your region is us-east-1 , omit --create-bucket-configuration LocationConstraint=USDREGION . Create an IAM user: USD aws iam create-user --user-name velero 1 1 If you want to use Velero to back up multiple clusters with multiple S3 buckets, create a unique user name for each cluster. Create a velero-policy.json file: USD cat > velero-policy.json <<EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:DescribeVolumes", "ec2:DescribeSnapshots", "ec2:CreateTags", "ec2:CreateVolume", "ec2:CreateSnapshot", "ec2:DeleteSnapshot" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "s3:GetObject", "s3:DeleteObject", "s3:PutObject", "s3:AbortMultipartUpload", "s3:ListMultipartUploadParts" ], "Resource": [ "arn:aws:s3:::USD{BUCKET}/*" ] }, { "Effect": "Allow", "Action": [ "s3:ListBucket", "s3:GetBucketLocation", "s3:ListBucketMultipartUploads" ], "Resource": [ "arn:aws:s3:::USD{BUCKET}" ] } ] } EOF Attach the policies to give the velero user the minimum necessary permissions: USD aws iam put-user-policy \ --user-name velero \ --policy-name velero \ --policy-document file://velero-policy.json Create an access key for the velero user: USD aws iam create-access-key --user-name velero Example output { "AccessKey": { "UserName": "velero", "Status": "Active", "CreateDate": "2017-07-31T22:24:41.576Z", "SecretAccessKey": <AWS_SECRET_ACCESS_KEY>, "AccessKeyId": <AWS_ACCESS_KEY_ID> } } Record the AWS_SECRET_ACCESS_KEY and the AWS_ACCESS_KEY_ID . You use the credentials to add AWS as a replication repository. 3.5.4. Configuring Google Cloud Platform You configure a Google Cloud Platform (GCP) storage bucket as a replication repository for the Migration Toolkit for Containers (MTC). Prerequisites You must have the gcloud and gsutil CLI tools installed. See the Google cloud documentation for details. The GCP storage bucket must be accessible to the source and target clusters. If you are using the snapshot copy method: The source and target clusters must be in the same region. The source and target clusters must have the same storage class. The storage class must be compatible with snapshots. Procedure Log in to GCP: USD gcloud auth login Set the BUCKET variable: USD BUCKET=<bucket> 1 1 Specify your bucket name. 
Create the storage bucket: USD gsutil mb gs://USDBUCKET/ Set the PROJECT_ID variable to your active project: USD PROJECT_ID=USD(gcloud config get-value project) Create a service account: USD gcloud iam service-accounts create velero \ --display-name "Velero service account" List your service accounts: USD gcloud iam service-accounts list Set the SERVICE_ACCOUNT_EMAIL variable to match its email value: USD SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list \ --filter="displayName:Velero service account" \ --format 'value(email)') Attach the policies to give the velero user the minimum necessary permissions: USD ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob ) Create the velero.server custom role: USD gcloud iam roles create velero.server \ --project USDPROJECT_ID \ --title "Velero Server" \ --permissions "USD(IFS=","; echo "USD{ROLE_PERMISSIONS[*]}")" Add IAM policy binding to the project: USD gcloud projects add-iam-policy-binding USDPROJECT_ID \ --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL \ --role projects/USDPROJECT_ID/roles/velero.server Update the IAM service account: USD gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET} Save the IAM service account keys to the credentials-velero file in the current directory: USD gcloud iam service-accounts keys create credentials-velero \ --iam-account USDSERVICE_ACCOUNT_EMAIL You use the credentials-velero file to add GCP as a replication repository. 3.5.5. Configuring Microsoft Azure You configure a Microsoft Azure Blob storage container as a replication repository for the Migration Toolkit for Containers (MTC). Prerequisites You must have the Azure CLI installed. The Azure Blob storage container must be accessible to the source and target clusters. If you are using the snapshot copy method: The source and target clusters must be in the same region. The source and target clusters must have the same storage class. The storage class must be compatible with snapshots. Procedure Log in to Azure: USD az login Set the AZURE_RESOURCE_GROUP variable: USD AZURE_RESOURCE_GROUP=Velero_Backups Create an Azure resource group: USD az group create -n USDAZURE_RESOURCE_GROUP --location CentralUS 1 1 Specify your location. 
Set the AZURE_STORAGE_ACCOUNT_ID variable: USD AZURE_STORAGE_ACCOUNT_ID="veleroUSD(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')" Create an Azure storage account: USD az storage account create \ --name USDAZURE_STORAGE_ACCOUNT_ID \ --resource-group USDAZURE_RESOURCE_GROUP \ --sku Standard_GRS \ --encryption-services blob \ --https-only true \ --kind BlobStorage \ --access-tier Hot Set the BLOB_CONTAINER variable: USD BLOB_CONTAINER=velero Create an Azure Blob storage container: USD az storage container create \ -n USDBLOB_CONTAINER \ --public-access off \ --account-name USDAZURE_STORAGE_ACCOUNT_ID Create a service principal and credentials for velero : USD AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv` Create a service principal with the Contributor role, assigning a specific --role and --scopes : USD AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name "velero" \ --role "Contributor" \ --query 'password' -o tsv \ --scopes /subscriptions/USDAZURE_SUBSCRIPTION_ID/resourceGroups/USDAZURE_RESOURCE_GROUP` The CLI generates a password for you. Ensure you capture the password. After creating the service principal, obtain the client id. USD AZURE_CLIENT_ID=`az ad app credential list --id <your_app_id>` Note For this to be successful, you must know your Azure application ID. Save the service principal credentials in the credentials-velero file: USD cat << EOF > ./credentials-velero AZURE_SUBSCRIPTION_ID=USD{AZURE_SUBSCRIPTION_ID} AZURE_TENANT_ID=USD{AZURE_TENANT_ID} AZURE_CLIENT_ID=USD{AZURE_CLIENT_ID} AZURE_CLIENT_SECRET=USD{AZURE_CLIENT_SECRET} AZURE_RESOURCE_GROUP=USD{AZURE_RESOURCE_GROUP} AZURE_CLOUD_NAME=AzurePublicCloud EOF You use the credentials-velero file to add Azure as a replication repository. 3.5.6. Additional resources MTC workflow About data copy methods Adding a replication repository to the MTC web console 3.6. Uninstalling MTC and deleting resources You can uninstall the Migration Toolkit for Containers (MTC) and delete its resources to clean up the cluster. Note Deleting the velero CRDs removes Velero from the cluster. Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure Delete the MigrationController custom resource (CR) on all clusters: USD oc delete migrationcontroller <migration_controller> Uninstall the Migration Toolkit for Containers Operator on OpenShift Container Platform 4 by using the Operator Lifecycle Manager. Delete cluster-scoped resources on all clusters by running the following commands: migration custom resource definitions (CRDs): USD oc delete USD(oc get crds -o name | grep 'migration.openshift.io') velero CRDs: USD oc delete USD(oc get crds -o name | grep 'velero') migration cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io') migration-operator cluster role: USD oc delete clusterrole migration-operator velero cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'velero') migration cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io') migration-operator cluster role bindings: USD oc delete clusterrolebindings migration-operator velero cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'velero')
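After deleting the Operator and the cluster-scoped resources, it can be useful to confirm that nothing was left behind. The following read-only sketch assumes the deletion commands above completed and uses only standard oc queries:
# Confirm that no MTC or Velero CRDs remain.
oc get crds -o name | grep -E 'migration.openshift.io|velero' || echo "no MTC or Velero CRDs found"
# Confirm that the related cluster roles and cluster role bindings are gone.
oc get clusterroles,clusterrolebindings -o name | grep -E 'migration|velero' || echo "no MTC or Velero RBAC objects found"
# Check whether the openshift-migration namespace still exists.
oc get namespace openshift-migration --ignore-not-found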
[ "podman login registry.redhat.io", "cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./", "cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./", "oc run test --image registry.redhat.io/ubi9 --command sleep infinity", "oc create -f operator.yml", "namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists", "oc create -f controller.yml", "oc get pods -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route]", "spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"", "oc get migrationcontroller <migration_controller> -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2", "oc replace -f migration-controller.yaml -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] migration_rsync_privileged: true", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsRoot: true", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] 
runAsUser: 10010001 runAsGroup: 3", "BUCKET=<your_bucket>", "REGION=<your_region>", "aws s3api create-bucket --bucket USDBUCKET --region USDREGION --create-bucket-configuration LocationConstraint=USDREGION 1", "aws iam create-user --user-name velero 1", "cat > velero-policy.json <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeVolumes\", \"ec2:DescribeSnapshots\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetObject\", \"s3:DeleteObject\", \"s3:PutObject\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}/*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\", \"s3:GetBucketLocation\", \"s3:ListBucketMultipartUploads\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}\" ] } ] } EOF", "aws iam put-user-policy --user-name velero --policy-name velero --policy-document file://velero-policy.json", "aws iam create-access-key --user-name velero", "{ \"AccessKey\": { \"UserName\": \"velero\", \"Status\": \"Active\", \"CreateDate\": \"2017-07-31T22:24:41.576Z\", \"SecretAccessKey\": <AWS_SECRET_ACCESS_KEY>, \"AccessKeyId\": <AWS_ACCESS_KEY_ID> } }", "gcloud auth login", "BUCKET=<bucket> 1", "gsutil mb gs://USDBUCKET/", "PROJECT_ID=USD(gcloud config get-value project)", "gcloud iam service-accounts create velero --display-name \"Velero service account\"", "gcloud iam service-accounts list", "SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list --filter=\"displayName:Velero service account\" --format 'value(email)')", "ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob )", "gcloud iam roles create velero.server --project USDPROJECT_ID --title \"Velero Server\" --permissions \"USD(IFS=\",\"; echo \"USD{ROLE_PERMISSIONS[*]}\")\"", "gcloud projects add-iam-policy-binding USDPROJECT_ID --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL --role projects/USDPROJECT_ID/roles/velero.server", "gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET}", "gcloud iam service-accounts keys create credentials-velero --iam-account USDSERVICE_ACCOUNT_EMAIL", "az login", "AZURE_RESOURCE_GROUP=Velero_Backups", "az group create -n USDAZURE_RESOURCE_GROUP --location CentralUS 1", "AZURE_STORAGE_ACCOUNT_ID=\"veleroUSD(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')\"", "az storage account create --name USDAZURE_STORAGE_ACCOUNT_ID --resource-group USDAZURE_RESOURCE_GROUP --sku Standard_GRS --encryption-services blob --https-only true --kind BlobStorage --access-tier Hot", "BLOB_CONTAINER=velero", "az storage container create -n USDBLOB_CONTAINER --public-access off --account-name USDAZURE_STORAGE_ACCOUNT_ID", "AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv`", "AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name \"velero\" --role \"Contributor\" --query 'password' -o tsv --scopes /subscriptions/USDAZURE_SUBSCRIPTION_ID/resourceGroups/USDAZURE_RESOURCE_GROUP`", "AZURE_CLIENT_ID=`az ad app credential list --id <your_app_id>`", "cat << EOF > ./credentials-velero 
AZURE_SUBSCRIPTION_ID=USD{AZURE_SUBSCRIPTION_ID} AZURE_TENANT_ID=USD{AZURE_TENANT_ID} AZURE_CLIENT_ID=USD{AZURE_CLIENT_ID} AZURE_CLIENT_SECRET=USD{AZURE_CLIENT_SECRET} AZURE_RESOURCE_GROUP=USD{AZURE_RESOURCE_GROUP} AZURE_CLOUD_NAME=AzurePublicCloud EOF", "oc delete migrationcontroller <migration_controller>", "oc delete USD(oc get crds -o name | grep 'migration.openshift.io')", "oc delete USD(oc get crds -o name | grep 'velero')", "oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')", "oc delete clusterrole migration-operator", "oc delete USD(oc get clusterroles -o name | grep 'velero')", "oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')", "oc delete clusterrolebindings migration-operator", "oc delete USD(oc get clusterrolebindings -o name | grep 'velero')" ]
https://docs.redhat.com/en/documentation/migration_toolkit_for_containers/1.8/html/migration_toolkit_for_containers/installing-mtc
7.12. Creating a Cloned Virtual Machine Based on a Template
7.12. Creating a Cloned Virtual Machine Based on a Template Cloned virtual machines are based on templates and inherit the settings of the template. A cloned virtual machine does not depend on the template on which it was based after it has been created. This means the template can be deleted if no other dependencies exist. Note If you clone a virtual machine from a template, the name of the template on which that virtual machine was based is displayed in the General tab of the Edit Virtual Machine window for that virtual machine. If you change the name of that template, the name of the template in the General tab will also be updated. However, if you delete the template from the Manager, the original name of that template will be displayed instead. Cloning a Virtual Machine Based on a Template Click Compute Virtual Machines . Click New . Select the Cluster on which the virtual machine will run. Select a template from the Based on Template drop-down menu. Enter a Name , Description and any Comments . You can accept the default values inherited from the template in the rest of the fields, or change them if required. Click the Resource Allocation tab. Select the Clone radio button in the Storage Allocation area. Select the disk format from the Format drop-down list. This affects the speed of the clone operation and the amount of disk space the new virtual machine initially requires. QCOW2 (Default) Faster clone operation Optimized use of storage capacity Disk space allocated only as required Raw Slower clone operation Optimized virtual machine read and write operations All disk space requested in the template is allocated at the time of the clone operation Use the Target drop-down menu to select the storage domain on which the virtual machine's virtual disk will be stored. Click OK . Note Cloning a virtual machine may take some time. A new copy of the template's disk must be created. During this time, the virtual machine's status is first Image Locked , then Down . The virtual machine is created and displayed in the Virtual Machines tab. You can now assign users to it, and can begin using it when the clone operation is complete.
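The same clone operation can also be scripted instead of using the Administration Portal. The following is a minimal sketch against the RHV REST API (v4), where the clone=true query parameter requests disks that are independent of the template; the Manager hostname, credentials, cluster, template, and virtual machine names are placeholders, so adjust them to your environment and verify the request against the REST API Guide for your RHV version.

```bash
# Sketch only: clone a new, independent VM from a template through the RHV REST API.
# Hostname, credentials, and resource names below are placeholders.
curl -k -u admin@internal:password \
  -H "Content-Type: application/xml" -H "Accept: application/xml" \
  -X POST "https://rhvm.example.com/ovirt-engine/api/vms?clone=true" \
  -d '<vm>
        <name>cloned-vm-01</name>
        <cluster><name>Default</name></cluster>
        <template><name>my-template</name></template>
      </vm>'
```

As in the Administration Portal procedure, the clone may take some time while the template's disks are copied; the new virtual machine stays in the Image Locked state until the copy finishes.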
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/Creating_a_cloned_virtual_machine_based_on_a_template
Appendix F. Pools, placement groups, and CRUSH configuration options
Appendix F. Pools, placement groups, and CRUSH configuration options The Ceph options that govern pools, placement groups, and the CRUSH algorithm. mon_allow_pool_delete Description Allows a monitor to delete a pool. In RHCS 3 and later releases, the monitor cannot delete the pool by default as an added measure to protect data. Type Boolean Default false mon_max_pool_pg_num Description The maximum number of placement groups per pool. Type Integer Default 65536 mon_pg_create_interval Description Number of seconds between PG creation in the same Ceph OSD Daemon. Type Float Default 30.0 mon_pg_stuck_threshold Description Number of seconds after which PGs can be considered as being stuck. Type 32-bit Integer Default 300 mon_pg_min_inactive Description Ceph issues a HEALTH_ERR status in the cluster log if the number of PGs that remain inactive longer than the mon_pg_stuck_threshold exceeds this setting. The default setting is one PG. A non-positive number disables this setting. Type Integer Default 1 mon_pg_warn_min_per_osd Description Ceph issues a HEALTH_WARN status in the cluster log if the average number of PGs per OSD in the cluster is less than this setting. A non-positive number disables this setting. Type Integer Default 30 mon_pg_warn_max_per_osd Description Ceph issues a HEALTH_WARN status in the cluster log if the average number of PGs per OSD in the cluster is greater than this setting. A non-positive number disables this setting. Type Integer Default 300 mon_pg_warn_min_objects Description Do not warn if the total number of objects in the cluster is below this number. Type Integer Default 1000 mon_pg_warn_min_pool_objects Description Do not warn on pools whose object number is below this number. Type Integer Default 1000 mon_pg_check_down_all_threshold Description The threshold of down OSDs by percentage after which Ceph checks all PGs to ensure they are not stuck or stale. Type Float Default 0.5 mon_pg_warn_max_object_skew Description Ceph issue a HEALTH_WARN status in the cluster log if the average number of objects in a pool is greater than mon pg warn max object skew times the average number of objects for all pools. A non-positive number disables this setting. Type Float Default 10 mon_delta_reset_interval Description The number of seconds of inactivity before Ceph resets the PG delta to zero. Ceph keeps track of the delta of the used space for each pool to aid administrators in evaluating the progress of recovery and performance. Type Integer Default 10 mon_osd_max_op_age Description The maximum age in seconds for an operation to complete before issuing a HEALTH_WARN status. Type Float Default 32.0 osd_pg_bits Description Placement group bits per Ceph OSD Daemon. Type 32-bit Integer Default 6 osd_pgp_bits Description The number of bits per Ceph OSD Daemon for Placement Groups for Placement purpose (PGPs). Type 32-bit Integer Default 6 osd_crush_chooseleaf_type Description The bucket type to use for chooseleaf in a CRUSH rule. Uses ordinal rank rather than name. Type 32-bit Integer Default 1 . Typically a host containing one or more Ceph OSD Daemons. osd_pool_default_crush_replicated_ruleset Description The default CRUSH ruleset to use when creating a replicated pool. Type 8-bit Integer Default 0 osd_pool_erasure_code_stripe_unit Description Sets the default size, in bytes, of a chunk of an object stripe for erasure coded pools. Every object of size S will be stored as N stripes, with each data chunk receiving stripe unit bytes. 
Each stripe of N * stripe unit bytes will be encoded/decoded individually. This option can be overridden by the stripe_unit setting in an erasure code profile. Type Unsigned 32-bit Integer Default 4096 osd_pool_default_size Description Sets the number of replicas for objects in the pool. The default value is the same as ceph osd pool set {pool-name} size {size} . Type 32-bit Integer Default 3 osd_pool_default_min_size Description Sets the minimum number of written replicas for objects in the pool in order to acknowledge a write operation to the client. If the minimum is not met, Ceph will not acknowledge the write to the client. This setting ensures a minimum number of replicas when operating in degraded mode. Type 32-bit Integer Default 0 , which means no particular minimum. If 0 , minimum is size - (size / 2) . osd_pool_default_pg_num Description The default number of placement groups for a pool. The default value is the same as pg_num with mkpool . Type 32-bit Integer Default 32 osd_pool_default_pgp_num Description The default number of placement groups for placement for a pool. The default value is the same as pgp_num with mkpool . PG and PGP should be equal. Type 32-bit Integer Default 0 osd_pool_default_flags Description The default flags for new pools. Type 32-bit Integer Default 0 osd_max_pgls Description The maximum number of placement groups to list. A client requesting a large number can tie up the Ceph OSD Daemon. Type Unsigned 64-bit Integer Default 1024 Note Default should be fine. osd_min_pg_log_entries Description The minimum number of placement group logs to maintain when trimming log files. Type 32-bit Int Unsigned Default 250 osd_default_data_pool_replay_window Description The time, in seconds, for an OSD to wait for a client to replay a request. Type 32-bit Integer Default 45
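Most of these options can be changed at runtime with the ceph config command rather than by editing a configuration file. The following is a minimal sketch; the option values shown are illustrative only and should be checked against the defaults and guidance above before applying them to a production cluster.

```bash
# Sketch: setting a few of the options described above with the ceph CLI.
# Values are examples only; confirm each option name and default for your release.
ceph config set mon mon_allow_pool_delete true       # permit pool deletion on monitors
ceph config set global osd_pool_default_size 3       # replicas per object in new pools
ceph config set global osd_pool_default_min_size 2   # replicas required to ack a write
ceph config set global osd_pool_default_pg_num 128   # default PG count for new pools
ceph config get mon mon_allow_pool_delete            # verify the change took effect
```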
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/configuration_guide/pools-placement-groups-and-crush-configuration-options_conf
Chapter 16. GenericKafkaListenerConfigurationBroker schema reference
Chapter 16. GenericKafkaListenerConfigurationBroker schema reference Used in: GenericKafkaListenerConfiguration Full list of GenericKafkaListenerConfigurationBroker schema properties Configures broker settings for listeners. Example configuration for the host , nodePort , loadBalancerIP , and annotations properties is shown in the GenericKafkaListenerConfiguration schema section. 16.1. Overriding advertised addresses for brokers By default, Streams for Apache Kafka tries to automatically determine the hostnames and ports that your Kafka cluster advertises to its clients. This is not sufficient in all situations, because the infrastructure on which Streams for Apache Kafka is running might not provide the right hostname or port through which Kafka can be accessed. You can specify a broker ID and customize the advertised hostname and port in the configuration property of the listener. Streams for Apache Kafka will then automatically configure the advertised address in the Kafka brokers and add it to the broker certificates so it can be used for TLS hostname verification. Overriding the advertised host and ports is available for all types of listeners. Example of an external route listener configured with overrides for advertised addresses listeners: #... - name: external1 port: 9094 type: route tls: true configuration: brokers: - broker: 0 advertisedHost: example.hostname.0 advertisedPort: 12340 - broker: 1 advertisedHost: example.hostname.1 advertisedPort: 12341 - broker: 2 advertisedHost: example.hostname.2 advertisedPort: 12342 # ... Instead of specifying the advertisedHost field for every broker, you can also use an advertisedHostTemplate to generate them automatically. The advertisedHostTemplate supports the following variables: The {nodeId} variable is replaced with the ID of the Kafka node to which the template is applied. The {nodePodName} variable is replaced with the OpenShift pod name for the Kafka node where the template is applied. Example route listener with advertisedHostTemplate configuration listeners: #... - name: external1 port: 9094 type: route tls: true configuration: advertisedHostTemplate: example.hostname.{nodeId} # ... 16.2. GenericKafkaListenerConfigurationBroker schema properties Property Property type Description broker integer ID of the kafka broker (broker identifier). Broker IDs start from 0 and correspond to the number of broker replicas. advertisedHost string The host name used in the brokers' advertised.listeners . advertisedPort integer The port number used in the brokers' advertised.listeners . host string The broker host. This field will be used in the Ingress resource or in the Route resource to specify the desired hostname. This field can be used only with route (optional) or ingress (required) type listeners. nodePort integer Node port for the per-broker service. This field can be used only with nodeport type listener. loadBalancerIP string The loadbalancer is requested with the IP address specified in this field. This feature depends on whether the underlying cloud provider supports specifying the loadBalancerIP when a load balancer is created. This field is ignored if the cloud provider does not support the feature.This field can be used only with loadbalancer type listener. annotations map Annotations that will be added to the Ingress or Service resource. You can use this field to configure DNS providers such as External DNS. This field can be used only with loadbalancer , nodeport , or ingress type listeners. 
labels map Labels that will be added to the Ingress , Route , or Service resource. This field can be used only with loadbalancer , nodeport , route , or ingress type listeners. externalIPs string array External IPs associated with the nodeport service. These IPs are used by clients external to the OpenShift cluster to access the Kafka brokers. This field is helpful when nodeport without an external IP is not sufficient, for example on bare-metal OpenShift clusters that do not support LoadBalancer service types. This field can only be used with nodeport type listeners.
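As a hypothetical illustration of the per-broker properties above, the following sketch combines externalIPs and advertisedHost for a nodeport listener; the listener name, port, and IP addresses are placeholders and are not taken from the schema reference itself.

```yaml
# Sketch: per-broker external IPs and advertised addresses for a nodeport listener.
# Listener name, port, and addresses are placeholders.
listeners:
  # ...
  - name: external2
    port: 9095
    type: nodeport
    tls: true
    configuration:
      brokers:
        - broker: 0
          externalIPs:
            - 192.0.2.10
          advertisedHost: 192.0.2.10
        - broker: 1
          externalIPs:
            - 192.0.2.11
          advertisedHost: 192.0.2.11
```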
[ "listeners: # - name: external1 port: 9094 type: route tls: true configuration: brokers: - broker: 0 advertisedHost: example.hostname.0 advertisedPort: 12340 - broker: 1 advertisedHost: example.hostname.1 advertisedPort: 12341 - broker: 2 advertisedHost: example.hostname.2 advertisedPort: 12342", "listeners: # - name: external1 port: 9094 type: route tls: true configuration: advertisedHostTemplate: example.hostname.{nodeId}" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-generickafkalistenerconfigurationbroker-reference
Chapter 5. Downloading deployment files
Chapter 5. Downloading deployment files To deploy Streams for Apache Kafka components using YAML files, download and extract the latest release archive ( streams-2.9-ocp-install-examples.zip ) from the Streams for Apache Kafka software downloads page . The release archive contains sample YAML files for deploying Streams for Apache Kafka components to OpenShift using oc . Begin by deploying the Cluster Operator from the install/cluster-operator directory to watch a single namespace, multiple namespaces, or all namespaces. In the install folder, you can also deploy other Streams for Apache Kafka components, including: Streams for Apache Kafka administrator roles ( strimzi-admin ) Standalone Topic Operator ( topic-operator ) Standalone User Operator ( user-operator ) Streams for Apache Kafka Drain Cleaner ( drain-cleaner ) The examples folder provides examples of Streams for Apache Kafka custom resources to help you develop your own Kafka configurations. Note Streams for Apache Kafka container images are available through the Red Hat Ecosystem Catalog , but we recommend using the provided YAML files for deployment. 5.1. Deploying the Streams for Apache Kafka Proxy Streams for Apache Kafka Proxy is an Apache Kafka protocol-aware proxy designed to enhance Kafka-based systems. Through its filter mechanism it allows additional behavior to be introduced into a Kafka-based system without requiring changes to either your applications or the Kafka cluster itself. For more information on connecting to and using the Streams for Apache Kafka Proxy, see the proxy guide in the Streams for Apache Kafka documentation . Important This feature is a technology preview and not intended for a production environment. For more information see the release notes . 5.2. Deploying the Streams for Apache Kafka Console After you have deployed a Kafka cluster that's managed by Streams for Apache Kafka, you can deploy and connect the Streams for Apache Kafka Console to the cluster. The console facilitates the administration of Kafka clusters, providing real-time insights for monitoring, managing, and optimizing each cluster from its user interface. For more information on connecting to and using the Streams for Apache Kafka Console, see the console guide in the Streams for Apache Kafka documentation .
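A typical deployment sequence from the extracted archive looks like the following sketch. The install/cluster-operator path comes from the archive layout described above; the target namespace (kafka) is a placeholder, and the sed step that points the operator's RoleBindings at that namespace reflects the usual single-namespace procedure, so confirm it against the deployment documentation for your version.

```bash
# Sketch: extract the release archive and deploy the Cluster Operator
# to watch a single namespace. The namespace name is a placeholder.
unzip streams-2.9-ocp-install-examples.zip -d streams-2.9
cd streams-2.9
oc new-project kafka
# Point the operator RoleBindings at the namespace it will watch.
sed -i 's/namespace: .*/namespace: kafka/' install/cluster-operator/*RoleBinding*.yaml
oc create -f install/cluster-operator -n kafka
oc get deployments -n kafka
```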
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/deploying_and_managing_streams_for_apache_kafka_on_openshift/downloads-str
Chapter 6. Managing thin pools using the Web Console
Chapter 6. Managing thin pools using the Web Console 6.1. Creating a thin pool using the Web Console Follow these instructions to create a logical thin pool using the Web Console. Log in to the Web Console. Click the hostname → Storage . Click the volume group. The Volume Group Overview page opens. Click + Create new Logical Volume . The Create Logical Volume window opens. Specify a Name for your thin pool. Set Purpose to Pool for thinly provisioned volumes . Specify a Size for your thin pool. Click Create . Your new thin pool appears in the list of logical volumes in this volume group. 6.2. Growing a thin pool using the Web Console Follow these instructions to increase the size of a logical thin pool using the Web Console. Log in to the Web Console. Click the hostname → Storage . Click the volume group. The Volume Group Overview page opens. Click the thin pool. On the Pool subtab, click Grow . The Grow Logical Volume window opens. Specify the new Size of the thin pool. Click Grow . 6.3. Deactivating a thin pool using the Web Console Follow these instructions to deactivate a logical thin pool using the Web Console. This deactivates all thinly provisioned logical volumes in the pool. Log in to the Web Console. Click the hostname → Storage . Click the volume group. The Volume Group Overview page opens. Click the thin pool. Click Deactivate . The thin pool is deactivated. 6.4. Activating a thin pool using the Web Console Follow these instructions to activate a logical thin pool using the Web Console. Log in to the Web Console. Click the hostname → Storage . Click the volume group. The Volume Group Overview page opens. Click the thin pool. Click Activate . The thin pool is activated. This does not activate thin provisioned logical volumes in the pool.
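For reference, the same operations can be performed from the command line with LVM; the Web Console steps above map roughly onto the following sketch. The volume group name, pool name, and sizes are placeholders.

```bash
# Sketch: thin pool lifecycle with the LVM CLI; names and sizes are placeholders.
lvcreate --type thin-pool -L 100G -n my_thinpool my_vg   # create the thin pool
lvextend -L +50G my_vg/my_thinpool                       # grow the thin pool
lvchange -an my_vg/my_thinpool                           # deactivate the thin pool
lvchange -ay my_vg/my_thinpool                           # reactivate the thin pool
lvs -a my_vg                                             # verify state, size, and data usage
```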
null
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/managing_red_hat_gluster_storage_using_the_web_console/assembly-cockpit-managing_thinpool
Chapter 47. limit
Chapter 47. limit This chapter describes the commands under the limit command. 47.1. limit create Create a limit Usage: Table 47.1. Positional Arguments Value Summary <resource-name> The name of the resource to limit Table 47.2. Optional Arguments Value Summary -h, --help Show this help message and exit --description <description> Description of the limit --region <region> Region for the limit to affect. --project <project> Project to associate the resource limit to --service <service> Service responsible for the resource to limit --resource-limit <resource-limit> The resource limit for the project to assume Table 47.3. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 47.4. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 47.5. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 47.6. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 47.2. limit delete Delete a limit Usage: Table 47.7. Positional Arguments Value Summary <limit-id> Limit to delete (id) Table 47.8. Optional Arguments Value Summary -h, --help Show this help message and exit 47.3. limit list List limits Usage: Table 47.9. Optional Arguments Value Summary -h, --help Show this help message and exit --service <service> Service responsible for the resource to limit --resource-name <resource-name> The name of the resource to limit --region <region> Region for the registered limit to affect. --project <project> List resource limits associated with project Table 47.10. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 47.11. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 47.12. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 47.13. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 47.4. limit set Update information about a limit Usage: Table 47.14. Positional Arguments Value Summary <limit-id> Limit to update (id) Table 47.15. Optional Arguments Value Summary -h, --help Show this help message and exit --description <description> Description of the limit --resource-limit <resource-limit> The resource limit for the project to assume Table 47.16. 
Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 47.17. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 47.18. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 47.19. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 47.5. limit show Display limit details Usage: Table 47.20. Positional Arguments Value Summary <limit-id> Limit to display (id) Table 47.21. Optional Arguments Value Summary -h, --help Show this help message and exit Table 47.22. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 47.23. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 47.24. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 47.25. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
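As a concrete illustration of the subcommands above, the following sketch registers a limit, lists it, raises it, and displays it; the project name, service, resource name, and limit values are placeholders that must already be registered in your deployment.

```bash
# Sketch: end-to-end use of the limit subcommands; names and values are placeholders.
openstack limit create --project demo-project \
    --service compute --resource-limit 20 cores
openstack limit list --project demo-project
openstack limit set --resource-limit 40 <limit-id>
openstack limit show <limit-id>
```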
[ "openstack limit create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--description <description>] [--region <region>] --project <project> --service <service> --resource-limit <resource-limit> <resource-name>", "openstack limit delete [-h] <limit-id> [<limit-id> ...]", "openstack limit list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--service <service>] [--resource-name <resource-name>] [--region <region>] [--project <project>]", "openstack limit set [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--description <description>] [--resource-limit <resource-limit>] <limit-id>", "openstack limit show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <limit-id>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/limit
Chapter 2. Determining permission policy and role configuration source
Chapter 2. Determining permission policy and role configuration source You can configure Red Hat Developer Hub policies and roles by using different sources. To maintain data consistency, Developer Hub associates each permission policy and role with one unique source. You can only use this source to change the resource. The available sources are: Configuration file Configure roles and policies in the Developer Hub app-config.yaml configuration file, for instance to declare your policy administrators . The configuration file source pertains to the default role:default/rbac_admin role provided by the RBAC plugin. The default role has limited permissions to create, read, update, and delete permission policies or roles, and to read catalog entities. Note In case the default permissions are insufficient for your administrative requirements, you can create a custom admin role with the required permission policies. REST API Configure roles and policies by using the Developer Hub Web UI or by using the REST API. CSV file Configure roles and policies by using external CSV files. Legacy The legacy source applies to policies and roles defined before RBAC backend plugin version 2.1.3 , and is the least restrictive among the source location options. Important Replace the permissions and roles that use the legacy source with permissions defined through the REST API or the CSV file sources. Procedure To determine the source of a role or policy, use a GET request, for example as sketched below.
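The exact endpoint depends on the RBAC backend plugin version, so treat the following as a sketch only: it assumes the plugin's REST API exposes roles under /api/permission/roles/<kind>/<namespace>/<name> and returns a metadata.source field, and it uses a placeholder hostname and bearer token. Verify the path and response shape against the RBAC REST API reference for your Developer Hub release.

```bash
# Sketch only: read the "source" of a role via a GET request.
# Endpoint path, hostname, and token handling are assumptions; verify them
# against the RBAC backend plugin API reference for your version.
curl -s \
  -H "Authorization: Bearer $API_TOKEN" \
  "https://developer-hub.example.com/api/permission/roles/role/default/rbac_admin" \
  | jq '.[].metadata.source'
```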
null
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/authorization/proc-determining-policy-and-role-source
Chapter 13. Configuring logs
Chapter 13. Configuring logs The Certificate System subsystem log files record events related to operations within that specific subsystem instance. For each subsystem, different logs are kept for issues such as installation, access, and web servers. All subsystems have similar log configuration, options, and administrative paths. For details about log administration after the installation, see Chapter 12 Configuring Subsystem Logs in the Administration Guide (Common Criteria Edition) . For an overview on logs, see Section 2.3.14, "Logs" . 13.1. Log settings The way that logs are configured can affect Certificate System performance. For example, log file rotation keeps logs from becoming too large, which slows down subsystem performance. This section explains the different kinds of logs recorded by Certificate System subsystems and covers important concepts such as log file rotation, buffered logging, and available log levels. 13.1.1. Services that are logged All major components and protocols of Certificate System log messages to log files. The following table lists services that are logged by default. To view messages logged by a specific service, customize log settings accordingly. For details, see Section 13.1.5, "Signing log files" Table 13.1. Services logged Service Description ACLs Logs events related to access control lists. Administration Logs events related to administration activities, such as HTTPS communication between the Console and the instance. All Logs events related to all the services. Authentication Logs events related to activity with the authentication module. Certificate Authority Logs events related to the Certificate Manager. Database Logs events related to activity with the internal database. HTTP Logs events related to the HTTP activity of the server. Note that HTTP events are actually logged to the errors log belonging to the Apache server incorporated with the Certificate System to provide HTTP services. Key Recovery Authority Logs events related to the KRA. LDAP Logs events related to activity with the LDAP directory, which is used for publishing certificates and CRLs. OCSP Logs events related to OCSP, such as OCSP status GET requests. Others Logs events related to other activities, such as command-line utilities and other processes. Request Queue Logs events related to the request queue activity. User and Group Logs events related to users and groups of the instance. 13.1.2. Log levels (message categories) The different events logged by Certificate System services are determined by the log levels, which makes identifying and filtering events simpler. The different Certificate System log levels are listed in Table 13.2, "Log levels and corresponding log messages" . Log levels are represented by numbers 0 to 10 , each number indicating the level of logging to be performed by the server. The level sets how detailed the logging should be. A higher priority level means less detail because only events of high priority are logged. Note The default log level is 1 and this value should not be changed. To enable debug logging, see Section 13.3.3, "Additional configuration for debug log" . The following table is provided for reference to better understand log messages. Table 13.2. Log levels and corresponding log messages Log level Message category Description 0 Debugging These messages contain debugging information. This level is not recommended for regular use because it generates too much information. 
1 Informational (default selection for audit log) These messages provide general information about the state of the Certificate System, including status messages such as Certificate System initialization complete and Request for operation succeeded . 2 Warning These messages are warnings only and do not indicate any failure in the normal operation of the server. 3 Failure; the default selection for system and error logs These messages indicate errors and failures that prevent the server from operating normally, including failures to perform a certificate service operation ( User authentication failed or Certificate revoked ) and unexpected situations that can cause irrevocable errors ( The server cannot send back the request it processed for a client through the same channel the request came from the client ). 4 Misconfiguration These messages indicate that a misconfiguration in the server is causing an error. 5 Catastrophic failure These messages indicate that, because of an error, the service cannot continue running. 6 Security-related events These messages identify occurrences that affect the security of the server. For example, Privileged access attempted by user with revoked or unlisted certificate. 7 PDU-related events (debugging) These messages contain debugging information for PDU events. This level is not recommended for regular use because it generates more information than is normally useful. 8 PDU-related events These messages relate transactions and rules processed on a PDU, such as creating MAC tokens. 9 PDU-related events This log levels provides verbose log messages for events processed on a PDU, such as creating MAC tokens. 10 All logging levels This log level enables all logging levels. Log levels can be used to filter log entries based on the severity of an event. The log level is successive; specifying a value of 3 causes levels 4, 5, and 6 to be logged. Log data can be extensive, especially at lower (more verbose) logging levels. Make sure that the host machine has sufficient disk space for all the log files. It is also important to define the logging level, log rotation, and server-backup policies appropriately so that all the log files are backed up and the host system does not get overloaded; otherwise, information can be lost. 13.1.3. Buffered and unbuffered logging The Java subsystems support buffered logging for all types of logs. The server can be configured for either buffered or unbuffered logging. If buffered logging is configured, the server creates buffers for the corresponding logs and holds the messages in the buffers for as long as possible. The server flushes out the messages to the log files only when one of the following conditions occurs: The buffer gets full. The buffer is full when the buffer size is equal to or greater than the value specified by the bufferSize configuration parameter. The default value for this parameter is 512 KB. The flush interval for the buffer is reached. The flush interval is reached when the time interval since the last buffer flush is equal to or greater than the value specified by the flushInterval configuration parameter. The default value for this parameter is 5 seconds. When current logs are read from Console. The server retrieves the latest log when it is queried for current logs. If the server is configured for unbuffered logging, the server flushes out messages as they are generated to the log files. 
Because the server performs an I/O operation (writing to the log file) each time a message is generated, configuring the server for unbuffered logging decreases performance. Setting log parameters is described in Chapter 13, Configuring logs . 13.1.4. Log file rotation The subsystem logs have an optional log setting that allows them to be rotated and start a new log file instead of letting log files grow indefinitely. Log files are rotated when either of the following occur: The size limit for the corresponding file is reached. The size of the corresponding log file is equal to or greater than the value specified by the maxFileSize configuration parameter. The default value for this parameter is 2000 KB. The age limit for the corresponding file is reached. The corresponding log file is equal to or older than the interval specified by the rolloverInterval configuration parameter. The default value for this parameter is 2592000 seconds (every thirty days). Note Setting both these parameters to 0 effectively disables the log file rotation. When a log file is rotated, the old file is named using the name of the file with an appended time stamp. The appended time stamp is an integer that indicates the date and time the corresponding active log file was rotated. The date and time have the forms YYYYMMDD (year, month, day) and HHMMSS (hour, minute, second). Log files, especially the audit log file, contain critical information. These files should be periodically archived to some backup medium by copying the entire log directory to an archive medium. Note Certificate System does not provide any tool or utility for archiving log files. Section 13.1.5, "Signing log files" suggests ways to ensure the integrity of the log files to be archived. 13.1.5. Signing log files As an alternative to the signed audit log feature ( Section 13.3.1.1, "Enabling signed audit logging" ), which creates audit logs where audit entries are automatically signed, log files can be signed by using COTS tools such as gpg , before they are archived or distributed for audit purposes. Doing so will allow to check files for tampering. 13.2. Operating system (external to RHCS) log settings 13.2.1. Enabling OS-level audit logs Warning All operations in the following sections have to be performed as root or a privileged user via sudo . The auditd logging framework provides many additional audit capabilities. These OS-level audit logs complement functionality provided by Certificate System directly. Before performing any of the following steps in this section, make sure the audit package is installed: Auditing of system package updates (using yum and rpm and including Certificate System) is automatically performed and requires no additional configuration. Note After adding each audit rule and restarting the auditd service, validate the new rules were added by running: The contents of the new rules should be visible in the output. For instructions on viewing the resulting audit logs, see 12.2.3 Displaying OS-level audit logs in the Administration Guide (Common Criteria Edition) . 13.2.1.1. Auditing Certificate System audit log deletion To receive audit events for when audit logs are deleted, you need to audit system calls whose targets are Certificate System logs. Create the file /etc/audit/rules.d/rhcs-audit-log-deletion.rules with the following contents: Then restart auditd : 13.2.1.2. 
Auditing unauthorized Certificate System use of secret keys To receive audit events for all access to Certificate System Secret or Private keys, you need to audit the file system access to the nssdb. Create the /etc/audit/rules.d/rhcs-audit-nssdb-access.rules file with the following contents: <instance name> is the name of the current instance. For each file ( <file> ) in /etc/pki/<instance name>/alias , add to /etc/audit/rules.d/rhcs-audit-nssdb-access.rules the following line : For example, if the instance name is pki-ca121318ec and cert9.db , key4.db , NHSM-CONN-XCcert9.db , NHSM-CONN-XCkey4.db , and pkcs11.txt are files, then the configuration file would contain: Then restart auditd : 13.2.1.3. Auditing time change events To receive audit events for time changes, you need to audit a system call access which could modify the system time. Create the /etc/audit/rules.d/rhcs-audit-rhcs_audit_time_change.rules file with the following contents: Then restart auditd : For instructions on how to set time, see Section 7.13.1, "Setting date and time for RHCS" . 13.2.1.4. Auditing access to Certificate System configuration To receive audit events for all modifications to the Certificate System instance configuration files, audit the file system access for these files. Create the /etc/audit/rules.d/rhcs-audit-config-access.rules file with the following contents: Additionally, add for each subsystem in the /etc/pki/instance_name/ directory the following contents: Example 13.1. rhcs-audit-config-access.rules configuration file For example, if the instance name is pki-ca121318ec and only a CA is installed, the /etc/audit/rules.d/rhcs-audit-config-access.rules file would contain: Note that access to the PKI NSS database is already audited under rhcs_audit_nssdb . 13.3. Configuring Logs in the CS.cfg File During the installation configuration, you can configure the logging by directly editing the CS.cfg for the instance. Stop the subsystem instance. Open the CS.cfg file in the /var/lib/pki/<instance_name>/<subsystem_type>/conf directory. Create a new log. To configure a log instance, modify the parameters associated with that log. These parameters begin with log.instance . Table 13.3. Log entry parameters Parameter Description type The type of log file. e.g. signedAudit . enable Sets whether the log is active. Only enabled logs actually record events. level Sets the log level in the text field. The level must be manually entered in the field; there is no selection menu. The log level setting is a numeric value, as listed in Section 13.1.2, "Log levels (message categories)" . fileName The full path, including the file name, to the log file. The subsystem user should have read/write permission to the file. bufferSize The buffer size in kilobytes (KB) for the log. Once the buffer reaches this size, the contents of the buffer are flushed out and copied to the log file. The default size is 512 KB. For more information on buffered logging, see Section 13.1.3, "Buffered and unbuffered logging" . flushInterval The amount of time, in seconds, before the contents of the buffer are flushed out and added to the log file. The default interval is 5 seconds. maxFileSize The size, kilobytes (KB), a log file can become before it is rotated. Once it reaches this size, the file is copied to a rotated file, and the log file is started new. For more information on log file rotation, see Section 13.1.4, "Log file rotation" . The default size is 2000 KB. 
rolloverInterval The frequency which the server rotates the active log file. The available choices are hourly, daily, weekly, monthly, and yearly. The default value is 2592000, which represents monthly in seconds. is monthly. For more information, see Section 13.1.4, "Log file rotation" . register [a] This variable is set to false by default and should remain so due to feature improvements. The self-test messages are only logged to the log file specified by selftests.container.logger.fileName . logSigning [b] Enables signed logging. When this parameter is enabled, provide a value for the signedAuditCertNickname parameter. This option means the log can only be viewed by an auditor. The value is either true or false . signedAuditCertNickname [c] The nickname of the certificate used to sign audit logs. The private key for this certificate must be accessible to the subsystem in order for it to sign the log. events [d] Specifies which events are logged to the audit log. Log events are separated by commas with no spaces. [a] register is for self-test logs only. [b] logSigning is for audit logs only. [c] signedAuditCertNickname is for audit logs only. [d] events is for audit logs only. Save the file. Start the subsystem instance. OR if using the Nuxwdog watchdog: 13.3.1. Enabling and configuring signed audit log 13.3.1.1. Enabling signed audit logging By default, audit logging is enabled upon installation. However, log signing needs to be enabled manually after installation. To display the current audit logging configuration, use the following command: pki-server subsystem-audit-config-show -i <instance_name> . For example, for a CA subsystem: To enable signed audit logging, use the pki-server utility to set the --logSigning option to true : Stop the instance: OR if using the Nuxwdog watchdog: Run the pki-server <subsystem> -audit-config-mod command, for example for a CA subsystem: Start the instance: OR if using the Nuxwdog watchdog: 13.3.1.2. Configuring audit events 13.3.1.2.1. Enabling and disabling audit events For details about enabling and disabling audit events, see 12.1.2.3 Configuring a Signed Audit Log in the Console in the Administration Guide (Common Criteria Edition) . In addition, audit event filters can be set to finer grained selection. See Section 13.3.1.2.2, "Filtering audit events" . For a complete list of auditable events in Certificate System, see Appendix E Audit events appendix in the Administration Guide (Common Criteria Edition) . 13.3.1.2.2. Filtering audit events In Certificate System administrators can set filters to configure which audit events will be logged in the audit file based on the event attributes. The format of the filters is the same as for LDAP filters. However, Certificate System only supports the following filters: Table 13.4. Supported audit event filters Type Format Example Presence ( attribute =*) (ReqID=*) Equality ( attribute = value ) (Outcome=Failure) Substring ( attribute = initial * any *... * any * final ) (SubjectID=*admin*) AND operation (&( filter_1 )( filter_2 )... ( filter_n )) (&(SubjectID=admin)(Outcome=Failure)) OR operation (|( filter_1 )( filter_2 )... ( filter_n )) (|(SubjectID=admin)(Outcome=Failure)) NOT operation (!( filter )) (!(SubjectID=admin)) For further details on LDAP filters, see the Using Compound Search Filters in the Red Hat Directory Server Administration Guide . Example 13.2. 
Filtering audit events To display the current settings for profile certificate requests: To display the current settings for processed certificate requests: To log only failed events for profile certificate requests and events for processed certificate requests that have the InfoName field set to rejectReason or cancelReason : Stop Certificate System: OR if using the Nuxwdog watchdog: Run the following command for profile certificate requests,: This results in the following entry in the CA's CS.cfg : Run the following command for processed certificate requests: This results in the following entry in the CA's CS.cfg : Start Certificate System: OR if using the Nuxwdog watchdog: 13.3.2. Configuring self-tests The self-tests feature and individual self-tests are registered and configured in the CS.cfg file. If a self-test is enabled, that self-test is listed for either on-demand or start up and is either empty or set as critical . Critical self-tests have a colon and the word critical after the name of the self-test. Otherwise, nothing is in this place. The server shuts down when a critical self-test fails during on demand self-tests; the server will not start when a critical self-test fails during start up. The implemented self-tests are automatically registered and configured when the instance was installed. The self-tests that are registered and configured are those associated with the subsystem type. A self-test's criticality is changed by changing the respective settings in the CS.cfg file. 13.3.2.1. Default self-tests at startup The following self-tests are enabled by default at startup. For the CA subsystem, the following self-tests are enabled by default at startup: CAPresence - used to verify the presence of the CA subsystem. CAValidity - used to determine that the CA subsystem is currently valid and has not expired. This involves checking the expiration of the CA certificate. SystemCertsVerification - used to determine that the system certificates are currently valid and have not expired or been revoked. For the CA subsystem, only validity tests for each certificate are done, leaving out certificate verification tests which could result in an OCSP request. This behavior can be overridden with the following config parameter: By default, this configuration parameter is considered false if not present at all. For the KRA subsystem, the following-self-tests are enabled: KRAPresence - used to verify the presence of the KRA subsystem. SystemCertsVerification - used to determine that the system certificates are currently valid and have not expired or been revoked. For the OCSP subsystem, the following self-tests are enabled: OCSPPresence - used to verify the presence of the OCSP subsystem. OCSPValidity - used to determine that the OCSP subsystem is currently valid and has not expired. This involves checking the expiration of the OCSP certificate. SystemCertsVerification - used to determine that the system certificates are currently valid and have not expired or been revoked. For the OCSP subsystem, only validity tests for each certificate are done, leaving out certificate verification tests which could result in an OCSP request. This behavior can be overridden with the following config parameter: By default, this configuration parameter is considered false if not present at all. For the TKS subsystem, the following-self-tests are enabled: SystemCertsVerification - used to determine that the system certificates are currently valid and have not expired or been revoked. 
For the TPS subsystem, the following-self-tests are enabled: TPSPresence - used to verify the presence of the TPS subsystem. TPSValidity - used to determine that the TPS subsystem is currently valid and has not expired. This involves checking the expiration of the TPS certificate. SystemCertsVerification - used to determine that the system certificates are currently valid and have not expired or been revoked. 13.3.2.2. Modifying self-test configuration By default, the self-test configuration is compliant. However, some settings can change the visibility of self-test logging or improve performance. To modify the configuration settings for self-tests: Stop the subsystem instance. Open the CS.cfg file located in the instance's conf/ directory. To edit the settings for the self-test log, edit the entries that begin with selftests.container.logger . Unless otherwise specified, these parameters do not affect compliance. These include the following parameters: bufferSize - Specify the buffer size in kilobytes (KB) for the log. The default size is 512 KB. Once the buffer reaches this size, the contents of the buffer are flushed out and copied to the log file. enable - Specify true to enable. Only enabled logs actually record events. This value must be enabled for compliance. fileName - Specify the full path, including the filename, to the file to write messages. The server must have read/write permission to the file. By default, the self-test log file is /selftests.log flushInterval - Specify the interval, in seconds, to flush the buffer to the file. The default interval is 5 seconds. The flushInterval is the amount of time before the contents of the buffer are flushed out and added to the log file. level - The default selection is 1; this log is not set up for any level beside 1. maxFileSize - Specify the file size in kilobytes (KB) for the error log. The default size is 2000 KB. The maxFileSize determines how large a log file can become before it is rotated. Once it reaches this size, the file is copied to a rotated file, and a new log file is started. register - This variable is set to false by default and should remain so due to feature improvements. The self-test messages are only logged to the log file specified by selftests.container.logger.fileName . rolloverInterval - Specify the frequency at which the server rotates the active error log file. The choices are hourly, daily, weekly, monthly, and yearly. The default value is 2592000 , which represents monthly in seconds. To edit the order in which the self-test are run, specify the order by listing any of the self-test as the value of the following parameters separated by a comma and a space. To mark a self-test critical, add a colon and the word critical to the name of the self-test in the list. To disable a self-test, remove it as the value of either the selftests.container.order.onDemand or selftests.container.order.startup parameters. Save the file. Start the subsystem. 13.3.3. Additional configuration for debug log 13.3.3.1. Enabling and disabling debug logging By default, debug logging is enabled in Certificate System. However, in certain situations, Administrators want to disable or re-enable this feature: Stop the Certificate System instance: OR if using the Nuxwdog watchdog: Edit the /CS.cfg file and set the debug.enabled parameter: To disable debug logging, set: Note Debug logs are not part of audit logging. Debug logs are helpful when trying to debug specific failures in Certificate System or a failing installation. 
By default, debug logs are enabled. If it is not desired, the administrator can safely disable debug logging to turn down verbosity. To enable debug logging, set: Start the Certificate System instance: OR if using the Nuxwdog watchdog: 13.3.3.2. Setting up rotation of debug log files Certificate System is not able to rotate debug logs. Debug logging is enabled by default and these logs grow until the file system is full. Use an external utility, such as logrotate , to rotate the logs. Example 13.3. Using logrotate to Rotate Debug Logs Create a configuration file, such as /etc/logrotate.d/rhcs_debug with the following content: To rotate debug logs for multiple subsystems in one configuration file, list the paths to the logs, each separated by white space, before the opening curly bracket. For example: For further details about logrotate and the parameters used in the example, see the logrotate(8) man page. 13.4. Audit retention Audit data are required to be retained in a way according to their retention categories: Extended Audit Retention: Audit data that is retained for necessary maintenance for a certificate's lifetime (from issuance to its expiration or revocation date). In Certificate System, they appear in the following areas: Signed audit logs: All events defined in Appendix E Audit events in the Administration Guide (Common Criteria Edition) . In the CA's internal LDAP server, certificate request records received by the CA and the certificate records as the requests are approved. Normal Audit Retention: Audit data that is typically retained only to support normal operation. This includes all events that do not fall under the extended audit retention category. Note Certificate System does not provide any interface to modify or delete audit data. 13.4.1. Location of audit data This section explains where Certificate System stores audit data and where to find the expiration date which plays a crucial role to determine the retention category. 13.4.1.1. Location of audit logs Certificate System stores audit logs in the /var/log/pki/instance_name/subsystem_type/signedAudit/ directory. For example, the audit logs of a CA are stored in the /var/log/pki/instance_name/ca/signedAudit/ directory. Normal users cannot access files in this directory. See 12.1.2 Managing Audit logs in the Administration Guide (Common Criteria Edition). For a list of audit log events that need to follow the extended audit retention period, see Appendix E Audit events in the Administration Guide (Common Criteria Edition) . Important Do not delete any audit logs that contain any events listed in the "Extended Audit Events" appendix. These audit logs will consume storage space potentially up to all space available in the disk partition. 13.4.1.2. Location of certificate requests and certificate records When certificate signing requests (CSR) are submitted, the CA stores the CSRs in the request repository provided by the CA's internal Directory Server. When these requests are approved, each certificate issued successfully, will result in an LDAP record being created in the certificate repository by the same internal Directory Server. The CA's internal Directory Server was specified in the following parameters when the CA was created using the pkispawn utility: pki_ds_hostname pki_ds_ldap_port pki_ds_database pki_ds_base_dn If a certificate request has been approved successfully, the validity of the certificate can be viewed by accessing the CA EE portal either by request ID or by serial number. 
To display the validity for a certificate request record: Log into the CA EE portal under https:// host_name :_port_/ca/ee/ca/ . Click Check Request Status . Enter the Request Identifier. Click Issued Certificate . Search for Validity . To display the validity from a certificate record: Log into the CA EE portal under https:// host_name :_port_/ca/ee/ca/ . Enter the serial number range. If you search for one specific record, enter the record's serial number in both the lowest and highest serial number field. Click on the search result. Search for Validity . Important Do not delete the request of the certificate records of the certificates that have not yet expired.
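To complement Section 13.1.5, which suggests signing archived log files with COTS tools such as gpg, the following sketch signs a rotated audit log before it is copied to backup media and verifies it later; the log path, the rotated-file timestamp suffix, and the signing key ID are placeholders.

```bash
# Sketch: sign an archived audit log with gpg and verify it later.
# The log path, rotated-file timestamp, and signing key are placeholders.
gpg --default-key auditor@example.com --detach-sign \
    /var/log/pki/instance_name/ca/signedAudit/ca_audit.20240101120000
# On the archive host, confirm the file has not been altered:
gpg --verify ca_audit.20240101120000.sig ca_audit.20240101120000
```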
[ "sudo yum install audit", "auditctl -l", "-a always,exit -F arch=b32 -S unlink -F dir=/var/log/pki -F key=rhcs_audit_deletion -a always,exit -F arch=b32 -S rename -F dir=/var/log/pki -F key=rhcs_audit_deletion -a always,exit -F arch=b32 -S rmdir -F dir=/var/log/pki -F key=rhcs_audit_deletion -a always,exit -F arch=b32 -S unlinkat -F dir=/var/log/pki -F key=rhcs_audit_deletion -a always,exit -F arch=b32 -S renameat -F dir=/var/log/pki -F key=rhcs_audit_deletion -a always,exit -F arch=b64 -S unlink -F dir=/var/log/pki -F key=rhcs_audit_deletion -a always,exit -F arch=b64 -S rename -F dir=/var/log/pki -F key=rhcs_audit_deletion -a always,exit -F arch=b64 -S rmdir -F dir=/var/log/pki -F key=rhcs_audit_deletion -a always,exit -F arch=b64 -S unlinkat -F dir=/var/log/pki -F key=rhcs_audit_deletion -a always,exit -F arch=b64 -S renameat -F dir=/var/log/pki -F key=rhcs_audit_deletion", "service auditd restart", "-w /etc/pki/<instance name>/alias -p warx -k rhcs_audit_nssdb", "-w /etc/pki/<instance name>/alias/<file> -p warx -k rhcs_audit_nssdb", "-w /etc/pki/pki-ca121318ec/alias -p warx -k rhcs_audit_nssdb -w /etc/pki/pki-ca121318ec/alias/cert9.db -p warx -k rhcs_audit_nssdb -w /etc/pki/pki-ca121318ec/alias/key4.db -p warx -k rhcs_audit_nssdb -w /etc/pki/pki-ca121318ec/alias/NHSM-CONN-XCcert9.db -p warx -k rhcs_audit_nssdb -w /etc/pki/pki-ca121318ec/alias/NHSM-CONN-XCkey4.db -p warx -k rhcs_audit_nssdb -w /etc/pki/pki-ca121318ec/alias/pkcs11.txt -p warx -k rhcs_audit_nssdb", "service auditd restart", "-a always,exit -F arch=b32 -S adjtimex,settimeofday,stime -F key=rhcs_audit_time_change -a always,exit -F arch=b64 -S adjtimex,settimeofday -F key=rhcs_audit_time_change -a always,exit -F arch=b32 -S clock_settime -F a0=0x0 -F key=rhcs_audit_time_change -a always,exit -F arch=b64 -S clock_settime -F a0=0x0 -F key=rhcs_audit_time_change -a always,exit -F arch=b32 -S clock_adjtime -F key=rhcs_audit_time_change -a always,exit -F arch=b64 -S clock_adjtime -F key=rhcs_audit_time_change -w /etc/localtime -p wa -k rhcs_audit_time_change", "service auditd restart", "-w /etc/pki/instance_name/server.xml -p wax -k rhcs_audit_config", "-w /etc/pki/instance_name/subsystem/CS.cfg -p wax -k rhcs_audit_config", "-w /etc/pki/pki-ca121318ec/server.xml -p wax -k rhcs_audit_config -w /etc/pki/pki-ca121318ec/ca/CS.cfg -p wax -k rhcs_audit_config", "systemctl stop pki-tomcatd-nuxwdog@instance_name.service", "systemctl start pki-tomcatd@instance_name.service", "systemctl start pki-tomcatd-nuxwdog@instance_name.service", "pki-server ca -audit-config-show -i rhcs10-RSA-SubCA Enabled: True Log File: var/lib/pki/rhcs10-RSA-SubCA/logs/ca/signedAudit/ca_audit Buffer Size (bytes): 512 Flush Interval (seconds): 5 Max File Size (bytes): 2000 Rollover Interval (seconds): 2592000 Expiration Time (seconds): 0 Log Signing: False Signing Certificate: NHSM-CONN-XC:auditSigningCert cert-rhcs10-RSA-SubCA CA", "pki-server subsystem -audit-config-mod --logSigning True -i instance_name", "systemctl stop pki-tomcatd@ instance_name .service", "systemctl stop pki-tomcatd-nuxwdog@ instance_name .service", "pki-server ca -audit-config-mod --logSigning True -i rhcs10-RSA-SubCA Log Signing: True", "systemctl start pki-tomcatd@ instance_name .service", "systemctl start pki-tomcatd-nuxwdog@ instance_name .service", "pki-server ca-audit-event-show PROFILE_CERT_REQUEST -i <instance_name> Event Name: PROFILE_CERT_REQUEST Enabled: True Filter: None", "*pki-server ca-audit-event-show CERT_REQUEST_PROCESSED -i <instance_name>* Event Name: 
CERT_REQUEST_PROCESSED Enabled: True Filter: None", "systemctl stop pki-tomcatd@instance_name.service", "systemctl stop pki-tomcatd-nuxwdog@instance_name.service", "pki-server ca-audit-event-update PROFILE_CERT_REQUEST --filter \"(Outcome=Failure)\" i <instance_name> Filter: (Outcome=Failure)", "log.instance.SignedAudit.filters.PROFILE_CERT_REQUEST=(Outcome=Failure)", "pki-server ca-audit-event-update CERT_REQUEST_PROCESSED --filter \"(|(InfoName=rejectReason)(InfoName=cancelReason))\" i <instance_name> Filter: (|(InfoName=rejectReason)(InfoName=cancelReason))", "log.instance.SignedAudit.filters.CERT_REQUEST_PROCESSED=(|(InfoName=rejectReason)(InfoName=cancelReason))", "systemctl start pki-tomcatd@instance_name.service", "systemctl start pki-tomcatd-nuxwdog@instance_name.service", "selftests.plugin.SystemCertsVerification.FullCAandOCSPVerify=true", "selftests.plugin.SystemCertsVerification.FullCAandOCSPVerify=true", "systemctl stop [email protected]", "systemctl stop [email protected]", "debug.enabled=false", "debug.enabled=true", "systemctl start [email protected]", "systemctl start [email protected]", "/var/log/pki/instance_name/subsystem/debug { copytruncate weekly rotate 5 notifempty missingok }", "/var/log/pki/instance_name/ca/debug /var/log/pki/instance_name/kra/debug { }" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide_common_criteria_edition/configuring_logs
8.4.2.3. dump/restore: Not Recommended for Mounted File Systems!
8.4.2.3. dump/restore: Not Recommended for Mounted File Systems! The dump and restore programs are Linux equivalents to the UNIX programs of the same name. As such, many system administrators with UNIX experience may feel that dump and restore are viable candidates for a good backup program under Red Hat Enterprise Linux. However, one method of using dump can cause problems. Here is Linus Torvalds' comment on the subject: Given this problem, the use of dump / restore on mounted file systems is strongly discouraged. However, dump was originally designed to back up unmounted file systems; therefore, in situations where it is possible to take a file system offline with umount , dump remains a viable backup technology.
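As a short illustration of the recommended offline usage, the sketch below takes the file system out of service before dumping it; the device name, mount point, and archive path are examples only.

# Unmount first so the dump reads a quiescent file system.
umount /home
# Level 0 (full) dump of the device backing /home, recorded in /etc/dumpdates.
dump -0uf /backup/home-level0.dump /dev/sdb1
# Remount once the dump completes.
mount /dev/sdb1 /home
# Later, restore interactively from the archive into the current directory.
restore -if /backup/home-level0.dump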
[ "From: Linus Torvalds To: Neil Conway Subject: Re: [PATCH] SMP race in ext2 - metadata corruption. Date: Fri, 27 Apr 2001 09:59:46 -0700 (PDT) Cc: Kernel Mailing List <linux-kernel At vger Dot kernel Dot org> [ linux-kernel added back as a cc ] On Fri, 27 Apr 2001, Neil Conway wrote: > > I'm surprised that dump is deprecated (by you at least ;-)). What to > use instead for backups on machines that can't umount disks regularly? Note that dump simply won't work reliably at all even in 2.4.x: the buffer cache and the page cache (where all the actual data is) are not coherent. This is only going to get even worse in 2.5.x, when the directories are moved into the page cache as well. So anybody who depends on \"dump\" getting backups right is already playing Russian roulette with their backups. It's not at all guaranteed to get the right results - you may end up having stale data in the buffer cache that ends up being \"backed up\". Dump was a stupid program in the first place. Leave it behind. > I've always thought \"tar\" was a bit undesirable (updates atimes or > ctimes for example). Right now, the cpio/tar/xxx solutions are definitely the best ones, and will work on multiple filesystems (another limitation of \"dump\"). Whatever problems they have, they are still better than the _guaranteed_(*) data corruptions of \"dump\". However, it may be that in the long run it would be advantageous to have a \"filesystem maintenance interface\" for doing things like backups and defragmentation.. Linus (*) Dump may work fine for you a thousand times. But it _will_ fail under the right circumstances. And there is nothing you can do about it." ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s3-disaster-backups-tech-dump
Chapter 20. kubernetes
Chapter 20. kubernetes The namespace for Kubernetes-specific metadata Data type group 20.1. kubernetes.pod_name The name of the pod Data type keyword 20.2. kubernetes.pod_id The Kubernetes ID of the pod Data type keyword 20.3. kubernetes.namespace_name The name of the namespace in Kubernetes Data type keyword 20.4. kubernetes.namespace_id The ID of the namespace in Kubernetes Data type keyword 20.5. kubernetes.host The Kubernetes node name Data type keyword 20.6. kubernetes.container_name The name of the container in Kubernetes Data type keyword 20.7. kubernetes.annotations Annotations associated with the Kubernetes object Data type group 20.8. kubernetes.labels Labels present on the original Kubernetes Pod Data type group 20.9. kubernetes.event The Kubernetes event obtained from the Kubernetes master API. This event description loosely follows type Event in Event v1 core . Data type group 20.9.1. kubernetes.event.verb The type of event, ADDED , MODIFIED , or DELETED Data type keyword Example value ADDED 20.9.2. kubernetes.event.metadata Information related to the location and time of the event creation Data type group 20.9.2.1. kubernetes.event.metadata.name The name of the object that triggered the event creation Data type keyword Example value java-mainclass-1.14d888a4cfc24890 20.9.2.2. kubernetes.event.metadata.namespace The name of the namespace where the event originally occurred. Note that it differs from kubernetes.namespace_name , which is the namespace where the eventrouter application is deployed. Data type keyword Example value default 20.9.2.3. kubernetes.event.metadata.selfLink A link to the event Data type keyword Example value /api/v1/namespaces/javaj/events/java-mainclass-1.14d888a4cfc24890 20.9.2.4. kubernetes.event.metadata.uid The unique ID of the event Data type keyword Example value d828ac69-7b58-11e7-9cf5-5254002f560c 20.9.2.5. kubernetes.event.metadata.resourceVersion A string that identifies the server's internal version of the event. Clients can use this string to determine when objects have changed. Data type integer Example value 311987 20.9.3. kubernetes.event.involvedObject The object that the event is about. Data type group 20.9.3.1. kubernetes.event.involvedObject.kind The type of object Data type keyword Example value ReplicationController 20.9.3.2. kubernetes.event.involvedObject.namespace The namespace name of the involved object. Note that it may differ from kubernetes.namespace_name , which is the namespace where the eventrouter application is deployed. Data type keyword Example value default 20.9.3.3. kubernetes.event.involvedObject.name The name of the object that triggered the event Data type keyword Example value java-mainclass-1 20.9.3.4. kubernetes.event.involvedObject.uid The unique ID of the object Data type keyword Example value e6bff941-76a8-11e7-8193-5254002f560c 20.9.3.5. kubernetes.event.involvedObject.apiVersion The version of kubernetes master API Data type keyword Example value v1 20.9.3.6. kubernetes.event.involvedObject.resourceVersion A string that identifies the server's internal version of the pod that triggered the event. Clients can use this string to determine when objects have changed. Data type keyword Example value 308882 20.9.4. kubernetes.event.reason A short machine-understandable string that gives the reason for generating this event Data type keyword Example value SuccessfulCreate 20.9.5. kubernetes.event.source_component The component that reported this event Data type keyword Example value replication-controller 20.9.6. 
kubernetes.event.firstTimestamp The time at which the event was first recorded Data type date Example value 2017-08-07 10:11:57.000000000 Z 20.9.7. kubernetes.event.count The number of times this event has occurred Data type integer Example value 1 20.9.8. kubernetes.event.type The type of event, Normal or Warning . New types could be added in the future. Data type keyword Example value Normal
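To see where the kubernetes.event values come from, you can inspect raw events with the OpenShift CLI; this is only an informal illustration, and the namespace and field paths below are examples.

# reason, involvedObject, and count in the raw event map to the
# kubernetes.event.* fields described above.
oc get events -n default -o json | head -n 40
oc get events -n default -o jsonpath='{range .items[*]}{.reason}{"\t"}{.involvedObject.kind}{"/"}{.involvedObject.name}{"\n"}{end}'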
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/logging/cluster-logging-exported-fields-kubernetes_cluster-logging-exported-fields
Chapter 3. Migrating Data Grid configuration
Chapter 3. Migrating Data Grid configuration Find changes to Data Grid configuration that affect migration to Data Grid 8. 3.1. Data Grid cache configuration Data Grid 8 provides empty cache containers by default. When you start Data Grid, it instantiates a cache manager so you can create caches at runtime. However, in comparison with versions, there is no "default" cache out of the box. In Data Grid 8, caches that you create through the CacheContainerAdmin API are permanent to ensure that they survive cluster restarts. Permanent caches .administration() .withFlags(AdminFlag.PERMANENT) 1 .getOrCreateCache("myPermanentCache", "org.infinispan.DIST_SYNC"); 1 AdminFlag.PERMANENT is enabled by default to ensure that caches survive restarts. You do not need to set this flag when you create caches. However, you must separately add persistent storage to Data Grid for data to survive restarts, for example: ConfigurationBuilder b = new ConfigurationBuilder(); b.persistence() .addSingleFileStore() .location("/tmp/myDataStore") .maxEntries(5000); Volatile caches .administration() .withFlags(AdminFlag.VOLATILE) 1 .getOrCreateCache("myTemporaryCache", "org.infinispan.DIST_SYNC"); 2 1 Sets the VOLATILE flag so caches are lost when Data Grid restarts. 2 Returns a cache named "myTemporaryCache" or creates one using the DIST_SYNC template. Data Grid 8 provides cache templates for server installations that you can use to create caches with recommended settings. You can get a list of available cache templates as follows: Use Tab auto-completion with the CLI: Use the REST API: 3.1.1. Cache encoding When you create remote caches you should configure the MediaType for keys and values. Configuring the MediaType guarantees the storage format for your data. To encode caches, you specify the MediaType in your configuration. Unless you have others requirements, you should use ProtoStream, which stores your data in a language-neutral, backwards compatible format. <encoding media-type="application/x-protostream"/> Distributed cache configuration with encoding <infinispan> <cache-container> <distributed-cache name="myCache" mode="SYNC"> <encoding media-type="application/x-protostream"/> ... </distributed-cache> </cache-container> </infinispan> If you do not encode remote caches, Data Grid Server logs the following message: In a future version, cache encoding will be required for operations where data conversion takes place; for example, cache indexing and searching the data container, remote task execution, reading and writing data in different formats from the Hot Rod and REST endpoints, as well as using remote filters, converters, and listeners. 3.1.2. Cache health status Data Grid 7.x includes a Health Check API that returns health status of the cluster as well as caches within it. Data Grid 8 also provides a Health API. For embedded and server installations, you can access the Health API via JMX with the following MBean: Data Grid Server also exposes the Health API through the REST endpoint and the Data Grid Console. Table 3.1. Health Status 7.x 8.x Description HEALTHY HEALTHY Indicates a cache is operating as expected. Rebalancing HEALTHY_REBALANCING Indicates a cache is in the rebalancing state but otherwise operating as expected. Unhealthy DEGRADED Indicates a cache is not operating as expected and possibly requires troubleshooting. N/A FAILED Added in 8.2 to indicate that a cache could not start with the supplied configuration. Additional resources Configuring Data Grid Caches 3.1.3. 
Changes to the Data Grid 8.1 configuration schema This topic lists changes to the Data Grid configuration schema between 8.0 and 8.1. New and modified elements and attributes stack adds support for inline JGroups stack definitions. stack.combine and stack.position attributes let you override and modify JGroups stack definitions. metrics lets you configure how Data Grid exports metrics that are compatible with the Eclipse MicroProfile Metrics API. context-initializer lets you specify a SerializationContextInitializer implementation that initializes a Protostream-based marshaller for user types. key-transformers lets you register transformers that convert custom keys to String for indexing with Lucene. statistics now defaults to "false". Deprecated elements and attributes The following elements and attributes are now deprecated: address-count attribute for the off-heap element. protocol attribute for the transaction element. duplicate-domains attribute for the jmx element. advanced-externalizer custom-interceptors state-transfer-executor transaction-protocol Removed elements and attributes The following elements and attributes were deprecated in a release and are now removed: deadlock-detection-spin compatibility write-skew versioning data-container eviction eviction-thread-policy 3.1.4. Changes to the Data Grid 8.2 configuration schema This topic lists changes to the Data Grid configuration schema between 8.1 and 8.2. Modified elements and attributes white-list changes to allow-list role is now a sub-element of roles for defined user roles and permissions for security authorization. context-initializer is updated for automatic SerializationContextInitializer registration. If your configuration does not contain context-initializer elements then the java.util.ServiceLoader mechanism automatically discovers all SerializationContextInitializer implementations on the classpath and loads them. Default value of the minOccurs attribute changes from 0 to 1 for the indexed-entity element. New elements and attributes property attribute added to the transport element that lets you pass name/value transport properties. cache-size and cache-timeout attributes added to the security element to configure the size and timeout for the Access Control List (ACL) cache. index-reader , index-writer , and index-merge child elements added to the indexing element. storage attribute added to the indexing element that specifies index storage options. path attribute added to the indexing element that specifies a directory when using file system storage for the index. bias-acquisition attribute added to the scattered-cache element that controls when nodes can acquire a bias on an entry. bias-lifespan attribute added to the scattered-cache element that specifies, in milliseconds, how long nodes can keep an acquired bias. merge-policy attribute added to the backups element that specifies an algorithm for resolving conflicts with cross-site replication. mode attribute added to the state-transfer child element for the backup . The mode attribute configures whether cross-site replication state transfer happens manually or automatically. INSERT_ABOVE , INSERT_BEFORE , and INSERT_BELOW attributes added to the stack.combine attribute for extending JGroups stacks with inheritance. Deprecated elements and attributes No elements or attributes are deprecated in Data Grid 8.2. Removed elements and attributes No elements or attributes are removed in Data Grid 8.2. 3.1.5. 
Changes to the Data Grid 8.3 configuration schema This topic lists changes to the Data Grid configuration schema between 8.2 and 8.3. Schema changes urn:infinispan:config:store:soft-index namespace is no longer available. Modified elements and attributes file-store element in the urn:infinispan:config namespace defaults to using soft-index file cache stores. single-file-store element is included in the urn:infinispan:config namespace but is now deprecated. New elements and attributes index and data elements are now available to configure how Data Grid stores indexes and data for file-based cache stores with the file-store element. open-files-limit and compaction-threshold attributes for the file-store element. cluster attribute added to the remote-sites and remote-site elements that lets you define global cluster names for cross-site communication. Note Global cluster names that you specify with the cluster attribute must be the same at all sites. accurate-size attribute added to the metrics element to enable calculations of the data set with the currentNumberOfEntries statistic. Important As of Data Grid 8.3 the currentNumberOfEntries statistic returns a value of -1 by default because it is an expensive operation to perform. touch attribute added to the expiration element that controls how timestamps get updated for entries in clustered caches with maximum idle expiration. The default value is SYNC and the attribute applies only to caches that use synchronous replication. Timestamps are updated asynchronously for caches that use asynchronous replication. lifespan attribute added to the strong-counter for attaching expiration values, in milliseconds. The default value is -1 which means strong consistent counters never expire. Note The lifespan attribute for strong counters is currently available as a Technology Preview. Deprecated elements and attributes The following elements and attributes are now deprecated: single-file-store element. max-entries and path attributes for the file-store element. Removed elements and attributes The following elements and attributes are no longer available in the Data Grid schema: remote-command-executor attribute for the transport element. capacity attribute for the distributed-cache element. 3.1.6. Changes to the Data Grid 8.4 configuration schema This topic lists changes to the Data Grid configuration schema between 8.3 and 8.4. Schema changes New elements and attributes default-max-results attribute added to the query element that lets you limits the number of results returned by a query. Applies to indexed, non-indexed, and hybrid queries. startup-mode attribute that lets you define which operation should Data Grid perform when the cache starts. The options are purge , reindex , auto or none . The default value is none . raft-members attribute that lets you define a list of raft members separated by space. Deprecated elements and attributes The following elements and attributes are now deprecated: scattered-cache element is now deprecated Removed elements and attributes The following elements and attributes are no longer available in the Data Grid schema: fetch-state store property is no longer available. You can remove the attribute from your xml configuration. 3.2. Eviction configuration Data Grid 8 simplifies eviction configuration in comparison with versions. However, eviction configuration has undergone numerous changes across different Data Grid versions, which means migration might not be straightforward. 
Note As of Data Grid 7.2, the memory element replaces the eviction element in the configuration. This section refers to eviction configuration with the memory element only. For information on migrating configuration that uses the eviction element, refer to the Data Grid 7.2 documentation. 3.2.1. Storage types Data Grid lets you control how to store entries in memory, with the following options: Store objects in JVM heap memory. Store bytes in native memory (off-heap). Store bytes in JVM heap memory. Changes in Data Grid 8 In 7.x versions, and 8.0, you use object , binary , and off-heap elements to configure the storage type. Starting with Data Grid 8.1, you use a storage attribute to store objects in JVM heap memory or as bytes in off-heap memory. To store bytes in JVM heap memory, you use the encoding element to specify a binary storage format for your data. Data Grid 7.x Data Grid 8 <memory><object /></memory> <memory /> <memory><off-heap /></memory> <memory storage="OFF_HEAP" /> <memory><binary /></memory> <encoding media-type="... " /> Object storage in Data Grid 8 By default, Data Grid 8.1 uses object storage (JVM heap): <distributed-cache> <memory /> </distributed-cache> You can also configure storage="HEAP" explicitly to store data as objects in JVM heap memory: <distributed-cache> <memory storage="HEAP" /> </distributed-cache> Off-heap storage in Data Grid 8 Set "OFF_HEAP" as the value of the storage attribute to store data as bytes in native memory: <distributed-cache> <memory storage="OFF_HEAP" /> </distributed-cache> Off-heap address count In versions, the address-count attribute for offheap lets you specify the number of pointers that are available in the hash map to avoid collisions. With Data Grid 8.1, address-count is no longer used and off-heap memory is dynamically re-sized to avoid collisions. Binary storage in Data Grid 8 Specify a binary storage format for cache entries with the encoding element: <distributed-cache> <!--Configure MediaType for entries with binary formats.--> <encoding media-type="application/x-protostream"/> <memory ... /> </distributed-cache> Note As a result of this change, Data Grid no longer stores primitives and String mixed with byte[] , but stores only byte[] . 3.2.2. Eviction threshold Eviction lets Data Grid control the size of the data container by removing entries when the container becomes larger than a configured threshold. In Data Grid 7.x and 8.0, you specify two eviction types that define the maximum limit for entries in the cache: COUNT measures the number of entries in the cache. MEMORY measures the amount of memory that all entries in the cache take up. Depending on the configuration you set, when either the count or the total amount of memory exceeds the maximum, Data Grid removes unused entries. Data Grid 7.x and 8.0 also use the size attribute that defines the size of the data container as a long. Depending on the storage type you configure, eviction occurs either when the number of entries or amount of memory exceeds the value of the size attribute. With Data Grid 8.1, the size attribute is deprecated along with COUNT and MEMORY . Instead, you configure the maximum size of the data container in one of two ways: Total number of entries with the max-count attribute. Maximum amount of memory, in bytes, with the max-size attribute. Eviction based on total number of entries <distributed-cache> <memory max-count="..." /> </distributed-cache> Eviction based on maximum amount of memory <distributed-cache> <memory max-size="..." 
/> </distributed-cache> 3.2.3. Eviction strategies Eviction strategies control how Data Grid performs eviction. Data Grid 7.x and 8.0 let you set one of the following eviction strategies with the strategy attribute: Strategy Description NONE Data Grid does not evict entries. This is the default setting unless you configure eviction. REMOVE Data Grid removes entries from memory so that the cache does not exceed the configured size. This is the default setting when you configure eviction. MANUAL Data Grid does not perform eviction. Eviction takes place manually by invoking the evict() method from the Cache API. EXCEPTION Data Grid does not write new entries to the cache if doing so would exceed the configured size. Instead of writing new entries to the cache, Data Grid throws a ContainerFullException . With Data Grid 8.1, you can use the same strategies as in versions. However, the strategy attribute is replaced with the when-full attribute. <distributed-cache> <memory when-full="<eviction_strategy>" /> </distributed-cache> Eviction algorithms With Data Grid 7.2, the ability to configure eviction algorithms was deprecated along with the Low Inter-Reference Recency Set (LIRS). From version 7.2 onwards, Data Grid includes the Caffeine caching library that implements a variation of the Least Frequently Used (LFU) cache replacement algorithm known as TinyLFU. For off-heap storage, Data Grid uses a custom implementation of the Least Recently Used (LRU) algorithm. 3.2.4. Eviction configuration comparison Compare eviction configuration between different Data Grid versions. Object storage and evict on number of entries 7.2 to 8.0 <memory> <object size="1000000" eviction="COUNT" strategy="REMOVE"/> </memory> 8.1 <memory max-count="1MB" when-full="REMOVE"/> Object storage and evict on amount of memory 7.2 to 8.0 <memory> <object size="1000000" eviction="MEMORY" strategy="MANUAL"/> </memory> 8.1 <memory max-size="1MB" when-full="MANUAL"/> Binary storage and evict on number of entries 7.2 to 8.0 <memory> <binary size="500000000" eviction="MEMORY" strategy="EXCEPTION"/> </memory> 8.1 <cache> <encoding media-type="application/x-protostream"/> <memory max-size="500 MB" when-full="EXCEPTION"/> </cache> Binary storage and evict on amount of memory 7.2 to 8.0 <memory> <binary size="500000000" eviction="COUNT" strategy="MANUAL"/> </memory> 8.1 <memory max-count="500 MB" when-full="MANUAL"/> Off-heap storage and evict on number of entries 7.2 to 8.0 <memory> <off-heap size="10000000" eviction="COUNT"/> </memory> 8.1 <memory storage="OFF_HEAP" max-count="10MB"/> Off-heap storage and evict on amount of memory 7.2 to 8.0 <memory> <off-heap size="1000000000" eviction="MEMORY"/> </memory> 8.1 <memory storage="OFF_HEAP" max-size="1GB"/> Additional resources Configuring Data Grid caches New eviction policy TinyLFU since RHDG 7.3 (Red Hat Knowledgebase) Product Documentation for Data Grid 7.2 3.3. Expiration configuration Expiration removes entries from caches based on their lifespan or maximum idle time. When migrating your configuration from Data Grid 7.x to 8, there are no changes that you need to make for expiration. The configuration remains the same: Lifespan expiration <expiration lifespan="1000" /> Max-idle expiration <expiration max-idle="1000" interval="120000" /> For Data Grid 7.2 and earlier, using max-idle with clustered caches had technical limitations that resulted in performance degradation. 
As of Data Grid 7.3, Data Grid sends touch commands to all owners in clustered caches when clients read entries that have max-idle expiration values. This ensures that the entries have the same relative access time across the cluster. Data Grid 8 sends the same touch commands for max-idle expiration across clusters. However, there are some technical considerations you should take into account before you start using max-idle . Refer to Configuring Data Grid caches to read more about how expiration works and to review how the touch commands affect performance with clustered caches. Additional resources Configuring Data Grid caches 3.4. Persistent cache stores In comparison with Data Grid 7.x, there are some changes to cache store configuration in Data Grid 8. Persistence SPI Data Grid 8.1 introduces the NonBlockingStore interface for cache stores. The NonBlockingStore SPI exposes methods that must never block the invoking thread. Cache stores that connect Data Grid to persistent data sources implement the NonBlockingStore interface. For custom cache store implementations that use blocking operations, Data Grid provides a BlockingManager utility class to handle those operations. The introduction of the NonBlockingStore interface deprecates the following interfaces: CacheLoader CacheWriter AdvancedCacheLoader AdvancedCacheWriter Custom cache stores Data Grid 8 lets you configure custom cache stores with the store element as in previous versions. The following changes apply: The singleton attribute is removed. Use shared=true instead. The segmented attribute is added and defaults to true . Segmented cache stores As of Data Grid 8, cache store configuration defaults to segmented="true" and applies to the following cache store elements: store file-store string-keyed-jdbc-store jpa-store remote-store rocksdb-store soft-index-file-store Note As of Data Grid 8.3, the file-store element in cache configuration creates a soft index file-based store. For more information see File-based cache stores default to soft index . Single file cache stores The relative-to attribute for Single File cache stores is removed in Data Grid 8. If your cache store configuration includes this attribute, Data Grid ignores it and uses only the path attribute to configure store location. JDBC cache stores JDBC cache stores must include an xmlns namespace declaration, which was not required in some Data Grid 7.x versions. <persistence> <string-keyed-jdbc-store xmlns="urn:infinispan:config:store:jdbc:14.0" shared="true"> ... </persistence> JDBC connection factories Data Grid 7.x JDBC cache stores can use the following ConnectionFactory implementations to obtain a database connection: ManagedConnectionFactory SimpleConnectionFactory PooledConnectionFactory Data Grid 8 now uses connection factories based on Agroal, the same technology as Red Hat JBoss EAP, to connect to databases. It is no longer possible to use c3p0.properties and hikari.properties files. Note As of Data Grid 8.3, JDBC connection factories are part of the org.infinispan.persistence.jdbc.common.configuration package.
Segmentation JDBC String-Based cache store configuration that enables segmentation, which is now the default, must include the segmentColumnName and segmentColumnType parameters, as in the following programmatic examples: MySQL Example builder.table() .tableNamePrefix("ISPN") .idColumnName("ID_COLUMN").idColumnType("VARCHAR(255)") .dataColumnName("DATA_COLUMN").dataColumnType("VARBINARY(1000)") .timestampColumnName("TIMESTAMP_COLUMN").timestampColumnType("BIGINT") .segmentColumnName("SEGMENT_COLUMN").segmentColumnType("INTEGER") PostgreSQL Example builder.table() .tableNamePrefix("ISPN") .idColumnName("ID_COLUMN").idColumnType("VARCHAR(255)") .dataColumnName("DATA_COLUMN").dataColumnType("BYTEA") .timestampColumnName("TIMESTAMP_COLUMN").timestampColumnType("BIGINT") .segmentColumnName("SEGMENT_COLUMN").segmentColumnType("INTEGER"); Write-behind The thread-pool-size attribute for Write-Behind mode is removed in Data Grid 8. Removed cache stores and loaders Data Grid 7.3 deprecates the following cache stores and loaders that are no longer available in Data Grid 8: Cassandra Cache Store REST Cache Store LevelDB Cache Store CLI Cache Loader Cache store migrator Cache stores in versions of Data Grid store data in a binary format that is not compatible with Data Grid 8. Use the StoreMigrator utility to migrate data in persistent cache stores to Data Grid 8. 3.4.1. File-based cache stores default to soft index Including file-store persistence in cache configuration now creates a soft index file-based cache store, SoftIndexFileStore , instead of a single-file cache store, SingleFileStore . In Data Grid 8.2 and earlier, SingleFileStore was the default for file-based cache stores. If you are migrating or upgrading to Data Grid 8.3, any file-store configuration is automatically converted to a SoftIndexFileStore at server startup. When your configuration is converted to SoftIndexFileStore , it is not possible to revert back to SingleFileStore without modifying the configuration to ensure compatibility with the new store. 3.4.1.1. Declarative configuration Data Grid 8.2 and earlier <persistence> <soft-index-file-store xmlns="urn:infinispan:config:soft-index:12.1"> <index path="testCache/index" /> <data path="testCache/data" /> </soft-index-file-store> </persistence> Data Grid 8.3 and later <persistence> <file-store> <index path="testCache/index" /> <data path="testCache/data" /> </file-store> </persistence> 3.4.1.2. Programmatic configuration Data Grid 8.2 and earlier ConfigurationBuilder b = new ConfigurationBuilder(); b.persistence() .addStore(SoftIndexFileStoreConfigurationBuilder.class) .indexLocation("testCache/index"); .dataLocation("testCache/data") Data Grid 8.3 and later ConfigurationBuilder b = new ConfigurationBuilder(); b.persistence() .addSoftIndexFileStore() .indexLocation("testCache/index") .dataLocation("testCache/data"); 3.4.1.3. Using single file cache stores with Data Grid 8.3 You can configure SingleFileStore cache stores with Data Grid 8.3 or later but Red Hat does not recommend doing so. You should use SoftIndexFileStore cache stores because they offer better scalability. Declarative <persistence passivation="false"> <single-file-store shared="false" preload="true" fetch-state="true" read-only="false"/> </persistence> Programmatic ConfigurationBuilder b = new ConfigurationBuilder(); b.persistence() .addSingleFileStore(); 3.5. Data Grid cluster transport Data Grid uses JGroups technology to handle communication between clustered nodes. 
JGroups stack configuration elements and attributes have not significantly changed from Data Grid versions. As in versions, Data Grid provides preconfigured JGroups stacks that you can use as a starting point for building custom cluster transport configuration optimized for your network requirements. Likewise, Data Grid provides the ability to add JGroups stacks defined in external XML files to your infinispan.xml . Data Grid 8 has brought usability improvements to make cluster transport configuration easier: Inline stacks let you configure JGroups stacks directly within infinispan.xml using the jgroups element. Declare JGroups schemas within infinispan.xml . Preconfigured JGroups stacks for UDP and TCP protocols. Inheritance attributes that let you extend JGroups stacks to adjust specific protocols and properties. <infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:infinispan:config:14.0 https://infinispan.org/schemas/infinispan-config-14.0.xsd urn:infinispan:server:14.0 https://infinispan.org/schemas/infinispan-server-14.0.xsd urn:org:jgroups http://www.jgroups.org/schema/jgroups-4.2.xsd" 1 xmlns="urn:infinispan:config:14.0" xmlns:server="urn:infinispan:server:14.0"> <jgroups> 2 <stack name="xsite" extends="udp"> 3 <relay.RELAY2 site="LON" xmlns="urn:org:jgroups"/> <remote-sites default-stack="tcp"> <remote-site name="LON"/> <remote-site name="NYC"/> </remote-sites> </stack> </jgroups> <cache-container ...> ... </infinispan> 1 Declares the JGroups 4.2 schema within infinispan.xml . 2 Adds a JGroups element to contain custom stack definitions. 3 Defines a JGroups protocol stack for cross-site replication. 3.5.1. Transport security As in versions, Data Grid 8 uses the JGroups SYM_ENCRYPT and ASYM_ENCRYPT protocols to encrypt cluster communication. As of Data Grid you can also use a security realm that includes a keystore and trust store as a TLS server identity to secure cluster transport, for example: <cache-container> <transport server:security-realm="tls-transport"/> </cache-container> Node authentication In Data Grid 7.x, the JGroups SASL protocol enables nodes to authenticate against security realms in both embedded and remote server installations. As of Data Grid 8, it is not possible to configure node authentication against security realms. Likewise Data Grid 8 does not recommend using the JGroups AUTH protocol for authenticating clustered nodes. However, with embedded Data Grid installations, JGroups cluster transport includes a SASL configuration as part of the jgroups element. As in versions, the SASL configuration relies on JAAS notions, such as CallbackHandlers , to obtain certain information necessary for node authentication. 3.5.2. Retransmission requests Data Grid 8.2 changes the configuration for retransmission requests for the UNICAST3 and NAKACK2 protocols in the default JGroups stacks, as follows: The value of the xmit_interval property is increased from 100 milliseconds to 200 milliseconds. The max_xmit_req_size property now sets a maximum of 500 messages per re-transmission request, instead of a maximum of 8500 with UDP or 64000 with TCP. As part of your migration to Data Grid 8 you should adapt any custom JGroups stack configuration to use these recommended settings. Additional resources Data Grid Server Guide Using Embedded Data Grid Caches Data Grid Security Guide 3.6. Data Grid authorization Data Grid uses role-based access control (RBAC) to restrict access to data and cluster encryption to secure communication between nodes. 
Roles and Permissions Data Grid 8.2 provides a set of default users and permissions that you can use for RBAC, with the following changes: ClusterRoleMapper is the default mechanism that Data Grid uses to associate security principals to authorization roles. A new MONITOR permission allows user access to Data Grid statistics. A new CREATE permission that users need to create and delete resources such as caches and counters. Note CREATE replaces the ___schema_manager and \___script_manager roles that users required to create and remove Protobuf schema and server scripts in Data Grid 8.1 and earlier. When migrating to Data Grid 8.2, you should assign the deployer role to users who had the ___schema_manager and \___script_manager roles in Data Grid 8.1 or earlier. Use the command line interface (CLI) as follows: [//containers/default]> user roles grant --roles=deployer <user> cache manager permissions Table 3.2. Data Grid 8.1 Permission Function Description CONFIGURATION defineConfiguration Defines new cache configurations. LISTEN addListener Registers listeners against a cache manager. LIFECYCLE stop Stops the cache manager. ALL - Includes all cache manager permissions. Table 3.3. Data Grid 8.2 Permission Function Description CONFIGURATION defineConfiguration Defines new cache configurations. LISTEN addListener Registers listeners against a cache manager. LIFECYCLE stop Stops the cache manager. CREATE createCache , removeCache Create and remove container resources such as caches, counters, schemas, and scripts. MONITOR getStats Allows access to JMX statistics and the metrics endpoint. ALL - Includes all cache manager permissions. Cache permissions Table 3.4. Data Grid 8.1 Permission Function Description READ get , contains Retrieves entries from a cache. WRITE put , putIfAbsent , replace , remove , evict Writes, replaces, removes, evicts data in a cache. EXEC distexec , streams Allows code execution against a cache. LISTEN addListener Registers listeners against a cache. BULK_READ keySet , values , entrySet , query Executes bulk retrieve operations. BULK_WRITE clear , putAll Executes bulk write operations. LIFECYCLE start , stop Starts and stops a cache. ADMIN getVersion , addInterceptor* , removeInterceptor , getInterceptorChain , getEvictionManager , getComponentRegistry , getDistributionManager , getAuthorizationManager , evict , getRpcManager , getCacheConfiguration , getCacheManager , getInvocationContextContainer , setAvailability , getDataContainer , getStats , getXAResource Allows access to underlying components and internal structures. ALL - Includes all cache permissions. ALL_READ - Combines the READ and BULK_READ permissions. ALL_WRITE - Combines the WRITE and BULK_WRITE permissions. Table 3.5. Data Grid 8.2 Permission Function Description READ get , contains Retrieves entries from a cache. WRITE put , putIfAbsent , replace , remove , evict Writes, replaces, removes, evicts data in a cache. EXEC distexec , streams Allows code execution against a cache. LISTEN addListener Registers listeners against a cache. BULK_READ keySet , values , entrySet , query Executes bulk retrieve operations. BULK_WRITE clear , putAll Executes bulk write operations. LIFECYCLE start , stop Starts and stops a cache. 
ADMIN getVersion , addInterceptor* , removeInterceptor , getInterceptorChain , getEvictionManager , getComponentRegistry , getDistributionManager , getAuthorizationManager , evict , getRpcManager , getCacheConfiguration , getCacheManager , getInvocationContextContainer , setAvailability , getDataContainer , getStats , getXAResource Allows access to underlying components and internal structures. MONITOR getStats Allows access to JMX statistics and the metrics endpoint. ALL - Includes all cache permissions. ALL_READ - Combines the READ and BULK_READ permissions. ALL_WRITE - Combines the WRITE and BULK_WRITE permissions. Cache manager authorization As of Data Grid 8.2, you can include the authorization element in the cache-container security configuration as follows: <infinispan> <cache-container name="secured"> <security> <authorization/> 1 </security> </cache-container> </infinispan> 1 Enables security authorization for the cache manager with default roles and permissions. You can also define global authorization configuration as follows: <infinispan> <cache-container default-cache="secured" name="secured"> <security> <authorization> 1 <identity-role-mapper /> 2 <role name="admin" permissions="ALL" /> 3 <role name="reader" permissions="READ" /> <role name="writer" permissions="WRITE" /> <role name="supervisor" permissions="READ WRITE EXEC"/> </authorization> </security> </cache-container> </infinispan> 1 Requires user permission to control the cache manager lifecycle. 2 Specifies an implementation of PrincipalRoleMapper that maps Principals to roles. 3 Defines a set of roles and associated permissions. Implicit cache authorization Data Grid 8 improves usability by allowing caches to inherit authorization configuration from the cache-container so you do not need to explicitly configure roles and permissions for each cache. <local-cache name="secured"> <security> <authorization/> 1 </security> </local-cache> 1 Uses roles and permissions defined in the cache container. As of Data Grid 8.2, including the authorization element in the configuration uses the default roles and permissions to restrict access to that cache unless you define a set of custom global permissions. Additional resources Data Grid Security Guide
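To see how several of the Data Grid 8 settings described above fit together (ProtoStream encoding, max-count, when-full), the following is a rough sketch of creating a cache through the REST endpoint. The server address, credentials, cache name, and configuration values are assumptions, not part of the migration guide.

# Write a minimal Data Grid 8 cache definition and create the cache over REST.
cat > mycache.xml <<'EOF'
<distributed-cache mode="SYNC">
  <encoding media-type="application/x-protostream"/>
  <memory max-count="1000000" when-full="REMOVE"/>
</distributed-cache>
EOF
curl -u admin:changeme -X POST -H "Content-Type: application/xml" \
     --data-binary @mycache.xml http://127.0.0.1:11222/rest/v2/caches/mycache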
[ ".administration() .withFlags(AdminFlag.PERMANENT) 1 .getOrCreateCache(\"myPermanentCache\", \"org.infinispan.DIST_SYNC\");", "ConfigurationBuilder b = new ConfigurationBuilder(); b.persistence() .addSingleFileStore() .location(\"/tmp/myDataStore\") .maxEntries(5000);", ".administration() .withFlags(AdminFlag.VOLATILE) 1 .getOrCreateCache(\"myTemporaryCache\", \"org.infinispan.DIST_SYNC\"); 2", "[//containers/default]> create cache --template=", "GET 127.0.0.1:11222/rest/v2/cache-managers/default/cache-configs/templates", "<infinispan> <cache-container> <distributed-cache name=\"myCache\" mode=\"SYNC\"> <encoding media-type=\"application/x-protostream\"/> </distributed-cache> </cache-container> </infinispan>", "WARN (main) [org.infinispan.encoding.impl.StorageConfigurationManager] ISPN000599: Configuration for cache 'mycache' does not define the encoding for keys or values. If you use operations that require data conversion or queries, you should configure the cache with a specific MediaType for keys or values.", "org.infinispan:type=CacheManager,name=\"default\",component=CacheContainerHealth", "<distributed-cache> <memory /> </distributed-cache>", "<distributed-cache> <memory storage=\"HEAP\" /> </distributed-cache>", "<distributed-cache> <memory storage=\"OFF_HEAP\" /> </distributed-cache>", "<distributed-cache> <!--Configure MediaType for entries with binary formats.--> <encoding media-type=\"application/x-protostream\"/> <memory ... /> </distributed-cache>", "<distributed-cache> <memory max-count=\"...\" /> </distributed-cache>", "<distributed-cache> <memory max-size=\"...\" /> </distributed-cache>", "<distributed-cache> <memory when-full=\"<eviction_strategy>\" /> </distributed-cache>", "<memory> <object size=\"1000000\" eviction=\"COUNT\" strategy=\"REMOVE\"/> </memory>", "<memory max-count=\"1MB\" when-full=\"REMOVE\"/>", "<memory> <object size=\"1000000\" eviction=\"MEMORY\" strategy=\"MANUAL\"/> </memory>", "<memory max-size=\"1MB\" when-full=\"MANUAL\"/>", "<memory> <binary size=\"500000000\" eviction=\"MEMORY\" strategy=\"EXCEPTION\"/> </memory>", "<cache> <encoding media-type=\"application/x-protostream\"/> <memory max-size=\"500 MB\" when-full=\"EXCEPTION\"/> </cache>", "<memory> <binary size=\"500000000\" eviction=\"COUNT\" strategy=\"MANUAL\"/> </memory>", "<memory max-count=\"500 MB\" when-full=\"MANUAL\"/>", "<memory> <off-heap size=\"10000000\" eviction=\"COUNT\"/> </memory>", "<memory storage=\"OFF_HEAP\" max-count=\"10MB\"/>", "<memory> <off-heap size=\"1000000000\" eviction=\"MEMORY\"/> </memory>", "<memory storage=\"OFF_HEAP\" max-size=\"1GB\"/>", "<expiration lifespan=\"1000\" />", "<expiration max-idle=\"1000\" interval=\"120000\" />", "<persistence> <string-keyed-jdbc-store xmlns=\"urn:infinispan:config:store:jdbc:14.0\" shared=\"true\"> </persistence>", "builder.table() .tableNamePrefix(\"ISPN\") .idColumnName(\"ID_COLUMN\").idColumnType(\"VARCHAR(255)\") .dataColumnName(\"DATA_COLUMN\").dataColumnType(\"VARBINARY(1000)\") .timestampColumnName(\"TIMESTAMP_COLUMN\").timestampColumnType(\"BIGINT\") .segmentColumnName(\"SEGMENT_COLUMN\").segmentColumnType(\"INTEGER\")", "builder.table() .tableNamePrefix(\"ISPN\") .idColumnName(\"ID_COLUMN\").idColumnType(\"VARCHAR(255)\") .dataColumnName(\"DATA_COLUMN\").dataColumnType(\"BYTEA\") .timestampColumnName(\"TIMESTAMP_COLUMN\").timestampColumnType(\"BIGINT\") .segmentColumnName(\"SEGMENT_COLUMN\").segmentColumnType(\"INTEGER\");", "<persistence> <soft-index-file-store xmlns=\"urn:infinispan:config:soft-index:12.1\"> 
<index path=\"testCache/index\" /> <data path=\"testCache/data\" /> </soft-index-file-store> </persistence>", "<persistence> <file-store> <index path=\"testCache/index\" /> <data path=\"testCache/data\" /> </file-store> </persistence>", "ConfigurationBuilder b = new ConfigurationBuilder(); b.persistence() .addStore(SoftIndexFileStoreConfigurationBuilder.class) .indexLocation(\"testCache/index\"); .dataLocation(\"testCache/data\")", "ConfigurationBuilder b = new ConfigurationBuilder(); b.persistence() .addSoftIndexFileStore() .indexLocation(\"testCache/index\") .dataLocation(\"testCache/data\");", "<persistence passivation=\"false\"> <single-file-store shared=\"false\" preload=\"true\" fetch-state=\"true\" read-only=\"false\"/> </persistence>", "ConfigurationBuilder b = new ConfigurationBuilder(); b.persistence() .addSingleFileStore();", "<infinispan xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"urn:infinispan:config:14.0 https://infinispan.org/schemas/infinispan-config-14.0.xsd urn:infinispan:server:14.0 https://infinispan.org/schemas/infinispan-server-14.0.xsd urn:org:jgroups http://www.jgroups.org/schema/jgroups-4.2.xsd\" 1 xmlns=\"urn:infinispan:config:14.0\" xmlns:server=\"urn:infinispan:server:14.0\"> <jgroups> 2 <stack name=\"xsite\" extends=\"udp\"> 3 <relay.RELAY2 site=\"LON\" xmlns=\"urn:org:jgroups\"/> <remote-sites default-stack=\"tcp\"> <remote-site name=\"LON\"/> <remote-site name=\"NYC\"/> </remote-sites> </stack> </jgroups> <cache-container ...> </infinispan>", "<cache-container> <transport server:security-realm=\"tls-transport\"/> </cache-container>", "[//containers/default]> user roles grant --roles=deployer <user>", "<infinispan> <cache-container name=\"secured\"> <security> <authorization/> 1 </security> </cache-container> </infinispan>", "<infinispan> <cache-container default-cache=\"secured\" name=\"secured\"> <security> <authorization> 1 <identity-role-mapper /> 2 <role name=\"admin\" permissions=\"ALL\" /> 3 <role name=\"reader\" permissions=\"READ\" /> <role name=\"writer\" permissions=\"WRITE\" /> <role name=\"supervisor\" permissions=\"READ WRITE EXEC\"/> </authorization> </security> </cache-container> </infinispan>", "<local-cache name=\"secured\"> <security> <authorization/> 1 </security> </local-cache>" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/migrating_to_data_grid_8/cache-migration
Chapter 4. Configuring an external MySQL database
Chapter 4. Configuring an external MySQL database Important When you externalize databases from a Red Hat 3scale API Management deployment, this means to provide isolation from the application and resilience against service disruptions at the database level. The resilience to service disruptions depends on the service level agreements (SLAs) provided by the infrastructure or platform provider where you host the databases. This is not offered by 3scale. For more details on externalizing of databases offered by your chosen deployment, see the associated documentation. Red Hat supports 3scale configurations that use an external MySQL database. However, the database itself is not within the scope of support. This guide provides information for externalizing the MySQL database. This is useful where there are several infrastructure issues, such as network or filesystem, using the default system-mysql pod. Prerequisites Access to an OpenShift Container Platform 4.x cluster using an account with administrator privileges. A 3scale instance installation on the OpenShift cluster. See Installing 3scale API Management on OpenShift . An external (that is not part of the 3scale installation) MySQL database, configured according to the External MySQL database configuration . To configure an external MySQL database, perform the steps outlined in the following sections: External MySQL database configuration Externalizing the MySQL database Rolling back 4.1. External MySQL database configuration When creating an external MySQL database, you need to configure it as explained below. MySQL database user The connection string that is used to configure the database connection (see System database secret to learn where to configure the connection string) for the external MySQL database must be in the following format: {DB_PASSWORD} and {DB_PORT} are optional. The user with username {DB_USER} must be created and granted all privileges to the database indicated as {DB_NAME} . Example commands for creating a user: In case of a new installation of 3scale, if the database {DB_NAME} does not exist, it will be created by the installation scripts. Binary logging configuration In case binary logging is enabled on the MySQL server, and the database user doesn't have the SUPER privilege, the global system variable log_bin_trust_function_creators must be set to 1 . This is required because 3scale uses stored procedures and triggers. Alternatively, if you choose to set SUPER privilege for the database user, note that it is deprecated as of MySQL 8.0 and will be removed in a future version of MySQL. See MySQL documentation for more information. 4.2. Externalizing the MySQL database Use the following steps to fully externalize the MySQL database. Warning This will cause downtime in the environment while the process is ongoing. Procedure Login to the OpenShift node where your 3scale On-premises instance is hosted and change to its project: Replace <user> , <url> , and <3scale-project> with your own credentials and the project name. Follow the steps below in the order shown to scale down all the pods. This will avoid loss of data. Stop 3scale On-premises From the OpenShift web console or from the command line interface (CLI), scale down all the deployment configurations to zero replicas in the following order: apicast-wildcard-router and zync for versions before 3scale 2.6 or zync-que and zync for 3scale 2.6 and above. apicast-staging and apicast-production . system-sidekiq , backend-cron , and system-searchd . 
3scale 2.3 includes system-resque . system-app . backend-listener and backend-worker . backend-redis , system-memcache , system-mysql , system-redis , and zync-database . The following example shows how to perform this in the CLI for apicast-wildcard-router and zync : Note The deployment configuration for each step can be scaled down at the same time. For example, you could scale down apicast-wildcard-router and zync together. However, it is better to wait for the pods from each step to terminate before scaling down the ones that follow. The 3scale instance will be completely inaccessible until it is fully started again. To confirm that no pods are running on the 3scale project use the following command: The command should return No resources found . Scale up the database level pods again using the following command: Ensure that you are able to login to the external MySQL database through the system-mysql pod before proceeding with the steps: <system_mysql_pod_id> : The identifier of the system-mysql pod. The user should always be root. For more information see External MySQL database configuration . The CLI will now display mysql> . Type exit , then press return . Type exit again at the prompt to go back to the OpenShift node console. Perform a full MySQL dump using the following command: Replace <system_mysql_pod_id> with your unique system-mysql pod ID . Validate that the file system-mysql-dump.sql contains a valid MySQL level dump as in the following example: Scale down the system-mysql pod and leave it with 0 (zero) replicas: Find the base64 equivalent of the URL mysql2://root:<password>@<host>/system , replacing <password> and <host> accordingly: Create a default 'user'@'%' on the remote MySQL database. It only needs to have SELECT privileges. Also find its base64 equivalents: Replace <password> with the password for 'user'@'%' . Perform a backup and edit the OpenShift secret system-database : URL : Replace it with the value from [step-8] . DB_USER and DB_PASSWORD : Use the values from the step for both. Send system-mysql-dump.sql to the remote database server and import the dump into it. Use the command to import it: Use the command below to send system-mysql-dump.sql to the remote database server and import the dump into the server: Ensure that a new database called system was created: Use the following instructions to Start 3scale On-premises , which scales up all the pods in the correct order. Start 3scale On-premises backend-redis , system-memcache , system-mysql , system-redis , and zync-database . backend-listener and backend-worker . system-app . system-sidekiq , backend-cron , and system-searchd 3scale 2.3 includes system-resque . apicast-staging and apicast-production . apicast-wildcard-router and zync for versions before 3scale 2.6 or zync-que and zync for 3scale 2.6 and above. The following example shows how to perform this in the CLI for backend-redis , system-memcache , system-mysql , system-redis , and zync-database : The system-app pod should now be up and running without any issues. After validation, scale back up the other pods in the order shown . Backup the system-mysql DeploymentConfig object. You may delete after a few days once you are sure everything is running properly. Deleting system-mysql DeploymentConfig avoids any future confusion if this procedure is done again in the future. 4.3. 
Rolling back Perform a rollback procedure if the system-app pod is not fully back online and the root cause for it could not be determined or addressed after following step 14 . Edit the secret system-database using the original values from system-database-orig.bkp.yml . See [step-10] : $ oc edit secret system-database Replace URL , DB_USER , and DB_PASSWORD with their original values. Scale down all the pods and then scale them back up again, including system-mysql . The system-app pod and the other pods to be started after it should be up and running again. Run the following command to confirm all pods are back up and running: $ oc get pods -n <3scale-project> 4.4. Additional information For more information about 3scale and MySQL database support, see Red Hat 3scale API Management Supported Configurations .
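Before editing the system-database secret, it can also help to confirm that the external server accepts the new database user and that binary logging will not block the stored procedures and triggers mentioned earlier. The host name, user, and password below are placeholders.

# Check the grants for the application user and the binary logging setting.
mysql -h <host> -u <DB_USER> -p -e "SHOW GRANTS FOR CURRENT_USER();"
mysql -h <host> -u root -p -e "SHOW VARIABLES LIKE 'log_bin_trust_function_creators';"
# If binary logging is enabled and the user lacks SUPER, set the variable.
mysql -h <host> -u root -p -e "SET GLOBAL log_bin_trust_function_creators = 1;"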
[ "mysql2://{DB_USER}:{DB_PASSWORD}@{DB_HOST}:{DB_PORT}/{DB_NAME}", "CREATE USER 'exampleuser'@'%' IDENTIFIED BY 'examplepass'; GRANT ALL PRIVILEGES ON exampledb.* to 'exampleuser'@'%';", "oc login -u <user> <url> oc project <3scale-project>", "oc scale deployment/apicast-wildcard-router --replicas=0 oc scale deployment/zync --replicas=0", "oc get pods -n <3scale_namespace>", "oc scale deployment/{backend-redis,system-memcache,system-mysql,system-redis,zync-database} --replicas=1", "oc rsh system-mysql-<system_mysql_pod_id> mysql -u root -p -h <host>", "oc rsh system-mysql-<system_mysql_pod_id> /bin/bash -c \"mysqldump -u root --single-transaction --routines --triggers --all-databases\" > system-mysql-dump.sql", "head -n 10 system-mysql-dump.sql -- MySQL dump 10.13 Distrib 8.0, for Linux (x86_64) -- -- Host: localhost Database: -- ------------------------------------------------------ -- Server version 8.0 /*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */; /*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */; /*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */; /*!40101 SET NAMES utf8 */;", "oc scale deployment/system-mysql --replicas=0", "echo \"mysql2://root:<password>@<host>/system\" | base64", "echo \"user\" | base64 echo \"<password>\" | base64", "oc get secret system-database -o yaml > system-database-orig.bkp.yml oc edit secret system-database", "mysql -u root -p < system-mysql-dump.sql", "mysql -u root -p -se \"SHOW DATABASES\"", "oc scale deployment/backend-redis --replicas=1 oc scale deployment/system-memcache --replicas=1 oc scale deployment/system-mysql --replicas=1 oc scale deployment/system-redis --replicas=1 oc scale deployment/zync-database --replicas=1", "oc edit secret system-database", "oc get pods -n <3scale-project>" ]
https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/installing_red_hat_3scale_api_management/configure-external-mysql-database
Chapter 29. Overview of NVMe over fabric devices
Chapter 29. Overview of NVMe over fabric devices Non-volatile Memory Express (NVMe) is an interface that allows host software utilities to communicate with solid-state drives. Use the following types of fabric transport to configure NVMe over fabric devices: NVMe over fabrics using Remote Direct Memory Access (RDMA). For information on how to configure NVMe/RDMA, see Section 29.1, "NVMe over fabrics using RDMA" . NVMe over fabrics using Fibre Channel (FC). For information on how to configure FC-NVMe, see Section 29.2, "NVMe over fabrics using FC" . When using FC and RDMA, the solid-state drive does not have to be local to your system; it can be configured remotely through an FC or RDMA controller. 29.1. NVMe over fabrics using RDMA The following sections describe how to deploy an NVMe over RDMA (NVMe/RDMA) initiator configuration. 29.1.1. Configuring an NVMe over RDMA client Use this procedure to configure an NVMe/RDMA client using the NVMe management command-line interface ( nvme-cli ). Install the nvme-cli package: Load the nvme-rdma module if it is not loaded: Discover available subsystems on the NVMe target: Connect to the discovered subsystems: Replace testnqn with the NVMe subsystem name. Replace 172.31.0.202 with the target IP address. Replace 4420 with the port number. List the NVMe devices that are currently connected: Optional: Disconnect from the target: Additional resources For more information, see the nvme man page and the NVMe-cli GitHub repository .
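For convenience, the individual nvme-cli steps above can be collected into a single shell session. This is only an illustrative sketch that reuses the example subsystem name, address, and port from this section:

yum install -y nvme-cli
modprobe nvme-rdma
nvme discover -t rdma -a 172.31.0.202 -s 4420
nvme connect -t rdma -n testnqn -a 172.31.0.202 -s 4420
nvme list                              # the new namespace appears, for example /dev/nvme0n1
cat /sys/class/nvme/nvme0/transport    # prints "rdma" for an NVMe/RDMA connection

Substitute your own subsystem NQN, target IP address, and port before running the commands.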
[ "yum install nvme-cli", "modprobe nvme-rdma", "nvme discover -t rdma -a 172.31.0.202 -s 4420 Discovery Log Number of Records 1, Generation counter 2 =====Discovery Log Entry 0====== trtype: rdma adrfam: ipv4 subtype: nvme subsystem treq: not specified, sq flow control disable supported portid: 1 trsvcid: 4420 subnqn: testnqn traddr: 172.31.0.202 rdma_prtype: not specified rdma_qptype: connected rdma_cms: rdma-cm rdma_pkey: 0x0000", "nvme connect -t rdma -n testnqn -a 172.31.0.202 -s 4420 # lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 465.8G 0 disk ├─sda1 8:1 0 1G 0 part /boot └─sda2 8:2 0 464.8G 0 part ├─rhel_rdma--virt--03-root 253:0 0 50G 0 lvm / ├─rhel_rdma--virt--03-swap 253:1 0 4G 0 lvm [SWAP] └─rhel_rdma--virt--03-home 253:2 0 410.8G 0 lvm /home nvme0n1 # cat /sys/class/nvme/nvme0/transport rdma", "nvme list", "nvme disconnect -n testnqn NQN:testnqn disconnected 1 controller(s) # lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 465.8G 0 disk ├─sda1 8:1 0 1G 0 part /boot └─sda2 8:2 0 464.8G 0 part ├─rhel_rdma--virt--03-root 253:0 0 50G 0 lvm / ├─rhel_rdma--virt--03-swap 253:1 0 4G 0 lvm [SWAP] └─rhel_rdma--virt--03-home 253:2 0 410.8G 0 lvm /home" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/ch-overview-of-NVMe-over-fabric-devices
Chapter 12. Trusted Container test
Chapter 12. Trusted Container test The Trusted Container test checks if Red Hat recognizes the Red Hat OpenStack Platform (RHOSP) plugin/driver container. The test also verifies whether the container is provided by Red Hat or by you. The certified container image reduces the number of sources a customer must use for deployment, and it also ensures that all the components included in the solution stack come from a trusted source. How RHOSP certification testing works During RHOSP certification testing, the Trusted Container test captures information about the installed and running containers. After the information is captured, the test queries the Red Hat certification services to determine if the containers are recognized and certified. Requirements for Partners If your driver is shipped as part of RHOSP (In Tree), you only need to run the Trusted Container test because the container image is already certified. However, if you ship your own container image (Out of Tree), as a prerequisite you must certify the container image with Red Hat Connect . For more information on container image certification, see the Partner Integration guide . In Red Hat Connect, when a Partner creates a new product request for RHOSP 13, they can only select Tech-Preview as the Release Category. You can execute the Trusted Container test after the container image is certified on Red Hat Connect . After the Trusted Container test is completed successfully, the Partner can choose the General Availability (GA) option. Success criteria You have received a container report that shows running and non-running containers on the overcloud controller node. The report shows that RHOSP services such as cinder, manila, and neutron are installed and running. Based on RHOSP certification testing, the running container can be either an RHOSP certified or a Red Hat certified container.
null
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_openstack_certification_policy_guide/assembly-trusted-container-test_rhosp-wf-openstack-configaration-test
4.2. Deploy JBoss Data Grid in JBoss EAP (Remote Client-Server Mode)
4.2. Deploy JBoss Data Grid in JBoss EAP (Remote Client-Server Mode) Red Hat JBoss Data Grid provides a set of modules for Red Hat JBoss Enterprise Application Platform 6.x. Using these modules means that JBoss Data Grid libraries do not need to be included in the user deployment. To avoid conflicts with the Infinispan modules that are already included with JBoss EAP, the JBoss Data Grid modules are placed within a separate slot and identified by the JBoss Data Grid version ( major . minor ). Note The JBoss Data Grid modules are not included in JBoss EAP. Instead, navigate to the Customer Support Portal at http://access.redhat.com to download these modules from the Red Hat JBoss Data Grid downloads page. To deploy JBoss Data Grid in JBoss EAP, add dependencies from the JBoss Data Grid module to the application's classpath (the JBoss EAP deployer) in one of the following ways: Add a dependency to the jboss-deployment-structure.xml file. Add a dependency to the MANIFEST.MF file. Add a Dependency to the jboss-deployment-structure.xml File Add the following configuration to the jboss-deployment-structure.xml file: Note For details about the jboss-deployment-structure.xml file, see the Red Hat JBoss Enterprise Application Platform documentation. Add a Dependency to the MANIFEST.MF File Add a dependency to the MANIFEST.MF file as follows: Example 4.2. Example MANIFEST.MF File The first line remains the same as the example. Depending on the dependency required, add one of the following to the second line of the file: Basic Hot Rod client: Hot Rod client with Remote Query functionality: 4.2.1. Using Custom Classes with the Hot Rod client Use either of the following two methods to use custom classes with the Hot Rod client: Option 1: Reference the deployment's class loader in the configuration builder for the Hot Rod client, as shown in the following example: Example 4.3. Referencing the custom class loader in the ConfigurationBuilder instance Option 2: Install the custom classes as their own module within JBoss EAP, and add a dependency on the newly created module to the JBoss Data Grid module at ${EAP_HOME}/modules/system/layers/base/org/infinispan/commons/jdg-6.x/module.xml .
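To show how the class-loader configuration from Example 4.3 is typically used, the following is a minimal Java sketch of a Hot Rod client. The server address, port, and cache usage are assumptions made for illustration and are not taken from this guide:

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
import org.infinispan.commons.marshall.jboss.GenericJBossMarshaller;

public class HotRodClientExample {
    public static void main(String[] args) {
        // Build the client configuration, reusing the deployment's class loader
        // so that custom classes packaged in the deployment can be marshalled.
        ConfigurationBuilder config = new ConfigurationBuilder();
        config.addServer().host("127.0.0.1").port(11222); // assumed server address and port
        config.marshaller(new GenericJBossMarshaller(Thread.currentThread().getContextClassLoader()));

        RemoteCacheManager cacheManager = new RemoteCacheManager(config.build());
        RemoteCache<String, Object> cache = cacheManager.getCache(); // default cache
        cache.put("key", "value");
        cacheManager.stop();
    }
}

The application providing this class must declare the Hot Rod client dependency through jboss-deployment-structure.xml or MANIFEST.MF as described above.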
[ "<jboss-deployment-structure xmlns=\"urn:jboss:deployment-structure:1.2\"> <deployment> <dependencies> <module name=\"org.infinispan.commons\" slot=\"jdg-6.6\" services=\"export\"/> <module name=\"org.infinispan.client.hotrod\" slot=\"jdg-6.6\" services=\"export\"/> </dependencies> </deployment> </jboss-deployment-structure>", "Manifest-Version: 1.0 Dependencies: org.infinispan.commons:jdg-6.6 services, org.infinispan.client.hotrod:jdg-6.6 services", "org.infinispan.commons:jdg-6.6 services, org.infinispan.client.hotrod:jdg-6.6 services", "org.infinispan.commons:jdg-6.6 services, org.infinispan.client.hotrod:jdg-6.6 services, org.infinispan.query.dsl:jdg-6.6 services, org.jboss.remoting3", "import org.infinispan.client.hotrod.configuration.ConfigurationBuilder; [...] ConfigurationBuilder config = new ConfigurationBuilder(); config.marshaller(new GenericJBossMarshaller(Thread.currentThread().getContextClassLoader()));" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/deploy_jboss_data_grid_in_jboss_eap_remote_client-server_mode
Chapter 18. Supported kdump configurations and targets
Chapter 18. Supported kdump configurations and targets The kdump mechanism is a feature of the Linux kernel that generates a crash dump file when a kernel crash occurs. The kernel dump file has critical information that helps to analyze and determine the root cause of a kernel crash. The crash can occur because of various factors, such as hardware issues or problems with third-party kernel modules. By using the provided information and procedures, you can perform the following actions: Identify the supported configurations and targets for your RHEL 8 systems. Configure kdump. Verify kdump operation. 18.1. Memory requirements for kdump For kdump to capture a kernel crash dump and save it for further analysis, a part of the system memory should be permanently reserved for the capture kernel. When reserved, this part of the system memory is not available to the main kernel. The memory requirements vary based on certain system parameters. One of the major factors is the system's hardware architecture. To identify the exact machine architecture, such as Intel 64 and AMD64, also known as x86_64, and print it to standard output, use the following command: With the stated list of minimum memory requirements, you can set the appropriate memory size to automatically reserve memory for kdump on the latest available versions. The memory size depends on the system's architecture and total available physical memory. Table 18.1. Minimum amount of reserved memory required for kdump Architecture Available Memory Minimum Reserved Memory AMD64 and Intel 64 ( x86_64 ) 1 GB to 4 GB 192 MB of RAM 4 GB to 64 GB 256 MB of RAM 64 GB and more 512 MB of RAM 64-bit ARM architecture ( arm64 ) 2 GB and more 480 MB of RAM IBM Power Systems ( ppc64le ) 2 GB to 4 GB 384 MB of RAM 4 GB to 16 GB 512 MB of RAM 16 GB to 64 GB 1 GB of RAM 64 GB to 128 GB 2 GB of RAM 128 GB and more 4 GB of RAM IBM Z ( s390x ) 1 GB to 4 GB 192 MB of RAM 4 GB to 64 GB 256 MB of RAM 64 GB and more 512 MB of RAM On many systems, kdump is able to estimate the amount of required memory and reserve it automatically. This behavior is enabled by default, but only works on systems that have more than a certain amount of total available memory, which varies based on the system architecture. Important The automatic configuration of reserved memory based on the total amount of memory in the system is a best effort estimation. The actual required memory might vary due to other factors such as I/O devices. Reserving too little memory might cause the debug kernel to fail to boot as a capture kernel in the case of a kernel panic. To avoid this problem, increase the crash kernel memory sufficiently. Additional resources How has the crashkernel parameter changed between RHEL8 minor releases? (Red Hat Knowledgebase) Technology capabilities and limits tables Minimum threshold for automatic memory reservation 18.2. Minimum threshold for automatic memory reservation By default, the kexec-tools utility configures the crashkernel command line parameter and reserves a certain amount of memory for kdump . On some systems, however, it is still possible to assign memory for kdump either by using the crashkernel=auto parameter in the boot loader configuration file, or by enabling this option in the graphical configuration utility. For this automatic reservation to work, a certain amount of total memory needs to be available in the system. The memory requirement varies based on the system's architecture.
If the system memory is less than the specified threshold value, you must configure the memory manually. Table 18.2. Minimum amount of memory required for automatic memory reservation Architecture Required Memory AMD64 and Intel 64 ( x86_64 ) 2 GB IBM Power Systems ( ppc64le ) 2 GB IBM Z ( s390x ) 4 GB Note The crashkernel=auto option in the boot command line is no longer supported on RHEL 9 and later releases. 18.3. Supported kdump targets When a kernel crash occurs, the operating system saves the dump file on the configured or default target location. You can save the dump file either directly to a device, store as a file on a local file system, or send the dump file over a network. With the following list of dump targets, you can know the targets that are currently supported or not supported by kdump . Table 18.3. kdump targets on RHEL 8 Target type Supported Targets Unsupported Targets Physical storage Logical Volume Manager (LVM). Thin provisioning volume. Fibre Channel (FC) disks such as qla2xxx , lpfc , bnx2fc , and bfa . An iSCSI software-configured logical device on a networked storage server. The mdraid subsystem as a software RAID solution. Hardware RAID such as cciss , hpsa , megaraid_sas , mpt2sas , and aacraid . SCSI and SATA disks. iSCSI and HBA offloads. Hardware FCoE such as qla2xxx and lpfc . BIOS RAID. Software iSCSI with iBFT . Currently supported transports are bnx2i , cxgb3i , and cxgb4i . Software iSCSI with a hybrid device driver such as be2iscsi . Fibre Channel over Ethernet (FCoE). Legacy IDE . GlusterFS servers. GFS2 file system. Clustered Logical Volume Manager (CLVM). High availability LVM volumes (HA-LVM). Network Hardware using kernel modules: tg3 , igb , ixgbe , sfc , e1000e , bna , cnic , netxen_nic , qlge , bnx2x , bnx , qlcnic , be2net , enic , virtio-net , ixgbevf , igbvf . IPv4 protocol. Network bonding on different devices, such as Ethernet devices or VLAN. VLAN network. Network Bridge. Network Teaming. Tagged VLAN and VLAN over a bond. Bridge network over bond, team, and VLAN. IPv6 protocol. Wireless connections. InfiniBand networks. VLAN network over bridge and team. Hypervisor Kernel-based virtual machines (KVM). Xen hypervisor in certain configurations only. VMware ESXi 4.1 and 5.1. Hyper-V 2012 R2 on RHEL Gen1 UP Guest only. File systems The ext[234], XFS, and NFS file systems. The Btrfs file system. Firmware BIOS-based systems. UEFI Secure Boot. Additional resources Configuring the kdump target 18.4. Supported kdump filtering levels To reduce the size of the dump file, kdump uses the makedumpfile core collector to compress the data and also exclude unwanted information, for example, you can remove hugepages and hugetlbfs pages by using the -8 level. The levels that makedumpfile currently supports can be seen in the table for Filtering levels for `kdump` . Table 18.4. Filtering levels for kdump Option Description 1 Zero pages 2 Cache pages 4 Cache private 8 User pages 16 Free pages Additional resources Configuring the kdump core collector 18.5. Supported default failure responses By default, when kdump fails to create a core dump, the operating system reboots. However, you can configure kdump to perform a different operation in case it fails to save the core dump to the primary target. Table 18.5. Failure responses for kdump Option Description dump_to_rootfs Attempt to save the core dump to the root file system. 
This option is especially useful in combination with a network target: if the network target is unreachable, this option configures kdump to save the core dump locally. The system is rebooted afterwards. reboot Reboot the system, losing the core dump in the process. halt Halt the system, losing the core dump in the process. poweroff Power off the system, losing the core dump in the process. shell Run a shell session from within the initramfs, allowing the user to record the core dump manually. final_action Enable additional operations such as reboot , halt , and poweroff actions after a successful kdump or when shell or dump_to_rootfs failure action completes. The default final_action option is reboot . Additional resources Configuring the kdump default failure responses 18.6. Using final_action parameter When kdump succeeds or if kdump fails to save the vmcore file at the configured target, you can perform additional operations like reboot , halt , and poweroff by using the final_action parameter. If the final_action parameter is not specified, reboot is the default response. Procedure To configure final_action , edit the /etc/kdump.conf file and add one of the following options: final_action reboot final_action halt final_action poweroff Restart the kdump service for the changes to take effect. 18.7. Using failure_action parameter The failure_action parameter specifies the action to perform when a dump fails in the event of a kernel crash. The default action for failure_action is reboot that reboots the system. The parameter recognizes the following actions to take: reboot Reboots the system after a dump failure. dump_to_rootfs Saves the dump file on a root file system when a non-root dump target is configured. halt Halts the system. poweroff Stops the running operations on the system. shell Starts a shell session inside initramfs , from which you can manually perform additional recovery actions. Procedure: To configure an action to take if the dump fails, edit the /etc/kdump.conf file and specify one of the failure_action options: failure_action reboot failure_action halt failure_action poweroff failure_action shell failure_action dump_to_rootfs Restart the kdump service for the changes to take effect.
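As an illustration of the directives discussed in this chapter, a /etc/kdump.conf fragment might combine a dump target, a failure action, and a final action as follows. The path and core_collector options shown here are common defaults and are given as an assumption, not a prescription for your system:

# /etc/kdump.conf (sketch)
path /var/crash
core_collector makedumpfile -l --message-level 1 -d 31
failure_action dump_to_rootfs
final_action reboot

After editing the file, restart the kdump service with kdumpctl restart so that the changes take effect.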
[ "uname -m", "kdumpctl restart", "kdumpctl restart" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_monitoring_and_updating_the_kernel/supported-kdump-configurations-and-targets_managing-monitoring-and-updating-the-kernel
Chapter 2. Configuring your firewall
Chapter 2. Configuring your firewall If you use a firewall, you must configure it so that OpenShift Container Platform can access the sites that it requires to function. You must always grant access to some sites, and you grant access to more if you use Red Hat Insights, the Telemetry service, a cloud to host your cluster, and certain build strategies. 2.1. Configuring your firewall for OpenShift Container Platform Before you install OpenShift Container Platform, you must configure your firewall to grant access to the sites that OpenShift Container Platform requires. There are no special configuration considerations for services running on only controller nodes compared to worker nodes. Note If your environment has a dedicated load balancer in front of your OpenShift Container Platform cluster, review the allowlists between your firewall and load balancer to prevent unwanted network restrictions to your cluster. Procedure Allowlist the following registry URLs: URL Port Function registry.redhat.io 443 Provides core container images access.redhat.com 443 Hosts a signature store that a container client requires for verifying images pulled from registry.access.redhat.com . In a firewall environment, ensure that this resource is on the allowlist. registry.access.redhat.com 443 Hosts all the container images that are stored on the Red Hat Ecosystem Catalog, including core container images. quay.io 443 Provides core container images cdn.quay.io 443 Provides core container images cdn01.quay.io 443 Provides core container images cdn02.quay.io 443 Provides core container images cdn03.quay.io 443 Provides core container images cdn04.quay.io 443 Provides core container images cdn05.quay.io 443 Provides core container images cdn06.quay.io 443 Provides core container images sso.redhat.com 443 The https://console.redhat.com site uses authentication from sso.redhat.com You can use the wildcards *.quay.io and *.openshiftapps.com instead of cdn.quay.io and cdn0[1-6].quay.io in your allowlist. You can use the wildcard *.access.redhat.com to simplify the configuration and ensure that all subdomains, including registry.access.redhat.com , are allowed. When you add a site, such as quay.io , to your allowlist, do not add a wildcard entry, such as *.quay.io , to your denylist. In most cases, image registries use a content delivery network (CDN) to serve images. If a firewall blocks access, image downloads are denied when the initial download request redirects to a hostname such as cdn01.quay.io . Allowlist any site that provides resources for a language or framework that your builds require. If you do not disable Telemetry, you must grant access to the following URLs to access Red Hat Insights: URL Port Function cert-api.access.redhat.com 443 Required for Telemetry api.access.redhat.com 443 Required for Telemetry infogw.api.openshift.com 443 Required for Telemetry console.redhat.com 443 Required for Telemetry and for insights-operator If you use Alibaba Cloud, Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) to host your cluster, you must grant access to the URLs that provide the cloud provider API and DNS for that cloud: Cloud URL Port Function Alibaba *.aliyuncs.com 443 Required to access Alibaba Cloud services and resources. Review the Alibaba endpoints_config.go file to determine the exact endpoints to allow for the regions that you use. 
AWS *.amazonaws.com Alternatively, if you choose to not use a wildcard for AWS APIs, you must allowlist the following URLs: 443 Required to access AWS services and resources. Review the AWS Service Endpoints in the AWS documentation to determine the exact endpoints to allow for the regions that you use. ec2.amazonaws.com 443 Used to install and manage clusters in an AWS environment. events.amazonaws.com 443 Used to install and manage clusters in an AWS environment. iam.amazonaws.com 443 Used to install and manage clusters in an AWS environment. route53.amazonaws.com 443 Used to install and manage clusters in an AWS environment. *.s3.amazonaws.com 443 Used to install and manage clusters in an AWS environment. *.s3.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. *.s3.dualstack.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. sts.amazonaws.com 443 Used to install and manage clusters in an AWS environment. sts.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. tagging.us-east-1.amazonaws.com 443 Used to install and manage clusters in an AWS environment. This endpoint is always us-east-1 , regardless of the region the cluster is deployed in. ec2.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. elasticloadbalancing.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. servicequotas.<aws_region>.amazonaws.com 443 Required. Used to confirm quotas for deploying the service. tagging.<aws_region>.amazonaws.com 443 Allows the assignment of metadata about AWS resources in the form of tags. GCP *.googleapis.com 443 Required to access GCP services and resources. Review Cloud Endpoints in the GCP documentation to determine the endpoints to allow for your APIs. accounts.google.com 443 Required to access your GCP account. Azure management.azure.com 443 Required to access Azure services and resources. Review the Azure REST API reference in the Azure documentation to determine the endpoints to allow for your APIs. *.blob.core.windows.net 443 Required to download Ignition files. login.microsoftonline.com 443 Required to access Azure services and resources. Review the Azure REST API reference in the Azure documentation to determine the endpoints to allow for your APIs. Allowlist the following URLs: URL Port Function *.apps.<cluster_name>.<base_domain> 443 Required to access the default cluster routes unless you set an ingress wildcard during installation. api.openshift.com 443 Required both for your cluster token and to check if updates are available for the cluster. console.redhat.com 443 Required for your cluster token. mirror.openshift.com 443 Required to access mirrored installation content and images. This site is also a source of release image signatures, although the Cluster Version Operator needs only a single functioning source. quayio-production-s3.s3.amazonaws.com 443 Required to access Quay image content in AWS. rhcos.mirror.openshift.com 443 Required to download Red Hat Enterprise Linux CoreOS (RHCOS) images. sso.redhat.com 443 The https://console.redhat.com site uses authentication from sso.redhat.com storage.googleapis.com/openshift-release 443 A source of release image signatures, although the Cluster Version Operator needs only a single functioning source. Operators require route access to perform health checks. 
Specifically, the authentication and web console Operators connect to two routes to verify that the routes work. If you are the cluster administrator and do not want to allow *.apps.<cluster_name>.<base_domain> , then allow these routes: oauth-openshift.apps.<cluster_name>.<base_domain> console-openshift-console.apps.<cluster_name>.<base_domain> , or the hostname that is specified in the spec.route.hostname field of the consoles.operator/cluster object if the field is not empty. Allowlist the following URLs for optional third-party content: URL Port Function registry.connect.redhat.com 443 Required for all third-party images and certified operators. rhc4tp-prod-z8cxf-image-registry-us-east-1-evenkyleffocxqvofrk.s3.dualstack.us-east-1.amazonaws.com 443 Provides access to container images hosted on registry.connect.redhat.com oso-rhc4tp-docker-registry.s3-us-west-2.amazonaws.com 443 Required for Sonatype Nexus, F5 Big IP operators. If you use a default Red Hat Network Time Protocol (NTP) server allow the following URLs: 1.rhel.pool.ntp.org 2.rhel.pool.ntp.org 3.rhel.pool.ntp.org Note If you do not use a default Red Hat NTP server, verify the NTP server for your platform and allow it in your firewall.
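The allowlists above can be spot-checked from a bastion host or cluster node. The following loop is only an illustrative sketch, and the host list is a small, assumed subset of the required sites; it does not replace configuring the firewall itself:

for host in registry.redhat.io registry.access.redhat.com quay.io cdn.quay.io sso.redhat.com api.openshift.com; do
  if curl --connect-timeout 5 -sI "https://$host" > /dev/null; then
    echo "reachable: $host"
  else
    echo "BLOCKED:   $host"
  fi
done

Some endpoints may reject HEAD requests while still being reachable, so treat failures as a prompt for further investigation rather than proof of a blocked site.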
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installation_configuration/configuring-firewall
Chapter 17. Upgrade command overview
Chapter 17. Upgrade command overview The upgrade process involves different commands that you run at certain stages of process. Important This section only contains information about each command. You must run these commands in a specific order and provide options specific to your overcloud. Wait until you receive instructions to run these commands at the appropriate step. 17.1. openstack overcloud upgrade prepare This command performs the initial preparation steps for the overcloud upgrade, which includes replacing the current overcloud plan on the undercloud with the new OpenStack Platform 16.2 overcloud plan and your updated environment files. This command functions similar to the openstack overcloud deploy command and uses many of the same options. 17.2. openstack overcloud upgrade run This command performs the upgrade process. Director creates a set of Ansible playbooks based on the new OpenStack Platform 16.2 overcloud plan and runs the fast forward tasks on the entire overcloud. This includes running the upgrade process through each OpenStack Platform version from 13 to 16.2. In addition to the standard upgrade process, this command can perform a Leapp upgrade of the operating system on overcloud nodes. Run these tasks using the --tags option. Upgrade task tags for Leapp system_upgrade Task that combines tasks from system_upgrade_prepare , system_upgrade_run , and system_upgrade_reboot . system_upgrade_prepare Tasks to prepare for the operating system upgrade with Leapp. system_upgrade_run Tasks to run Leapp and upgrade the operating system. system_upgrade_reboot Tasks to reboot a system and complete the operating system upgrade. Upgrade task tags for workload migration nova_hybrid_state Task that sets up temporary OpenStack Platform 16.2 containers on Compute nodes to facilitate workload migration during the upgrade. 17.3. openstack overcloud external-upgrade run This command performs upgrade tasks outside the standard upgrade process. Director creates a set of Ansible playbooks based on the new OpenStack Platform 16.2 overcloud plan and you run specific tasks using the --tags option. External task tags for container management container_image_prepare Tasks for pulling container images to the undercloud registry and preparing the images for the overcloud to use. External task tags for Ceph Storage upgrades If your deployment uses a Red Hat Ceph Storage cluster that was deployed using director, you can use the following tags: ceph Tasks to install Red Hat Ceph Storage using ceph-ansible playbooks. ceph_systemd Tasks to convert Red Hat Ceph Storage systemd unit files to use podman management. If you are upgrading with external Ceph deployments, you can skip the tasks that use the ceph and ceph_systemd tags. External task tags for database transfer system_upgrade_cleanup Tasks to clean storage directories related to system_upgrade_transfer_data tasks. system_upgrade_stop_services Tasks to shut down all services. system_upgrade_transfer_data Tasks to shut down all services and perform a database transfer to the bootstrap node. 17.4. openstack overcloud upgrade converge This command performs the final step in the overcloud upgrade. This final step synchronizes the overcloud heat stack with the OpenStack Platform 16.2 overcloud plan and your updated environment files. This process ensures that the resulting overcloud matches the configuration of a new OpenStack Platform 16.2 overcloud. This command is similar to the openstack overcloud deploy command and uses many of the same options. 
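For orientation, the commands and tags described above are typically combined as in the following sketch. The stack name, node limit, and environment file placeholders are assumptions for illustration; the exact invocations for your environment are given in the upgrade steps later in this guide:

openstack overcloud upgrade prepare --stack overcloud -e <environment_file>
openstack overcloud upgrade run --stack overcloud --tags system_upgrade --limit overcloud-controller-0
openstack overcloud external-upgrade run --stack overcloud --tags system_upgrade_transfer_data
openstack overcloud upgrade converge --stack overcloud -e <environment_file>

Each command operates on the OpenStack Platform 16.2 overcloud plan, so always run prepare before run, external-upgrade run, and converge.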
17.5. Overcloud node upgrade workflow When you perform an upgrade on each overcloud node, you must consider the following aspects to determine the correct commands to run at the relevant stage in the upgrade: Controller Services Does the node contain Pacemaker services? You must first upgrade the bootstrap node in order to start a database transfer and launch temporary containers that facilitate migration during the transition from Red Hat OpenStack 13 to 16.2. During the bootstrap Controller node upgrade process, a new Pacemaker cluster is created and new Red Hat OpenStack 16.2 containers are started on the node, while the remaining Controller nodes are still running on Red Hat OpenStack 13. After upgrading the bootstrap node, you must upgrade each additional node with Pacemaker services and ensure that each node joins the new Pacemaker cluster started with the bootstrap node. The process for upgrading split-service Controller nodes without Pacemaker does not require these additional steps. Compute Services Is the node a Compute node? If the node does contain Compute services, you must migrate virtual machines from the node to ensure maximum availability. A Compute node in this situation includes any node designed to host virtual machines. This definition includes the following Compute node types: Regular Compute nodes Compute nodes with Hyper-Converged Infrastructure (HCI) Compute nodes with Network Function Virtualization technologies such as Data Plane Development Kit (DPDK) or Single Root Input/Output Virtualization (SR-IOV) Real Time Compute nodes Ceph Storage Services Does the node contain any Ceph Storage services? You must convert the systemd unit files for any containerized Ceph Storage services on the node to use podman instead of docker . This applies to the following node types: Ceph Storage OSD nodes Controller nodes with Ceph MON services Split-Controller Ceph MON nodes Compute nodes with Hyper-Converged Infrastructure (HCI) Workflow Use the following workflow diagram to identify the correct upgrade path for specific nodes:
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/framework_for_upgrades_13_to_16.2/upgrade-command-overview_upgrading-overcloud
Chapter 25. file
Chapter 25. file The path to the log file from which the collector reads this log entry. Normally, this is a path in the /var/log file system of a cluster node. Data type text
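A hypothetical log record illustrating this field might look like the following; the path and message values are invented for illustration only:

{
  "file": "/var/log/containers/my-app-1-abcde_my-project_my-app-0123456789abcdef.log",
  "message": "example log line read from the file above"
}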
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/logging/file
15.5. Live KVM Migration with virsh
15.5. Live KVM Migration with virsh A guest virtual machine can be migrated to another host physical machine with the virsh command. The migrate command accepts parameters in the following format: Note that the --live option may be eliminated when live migration is not required. Additional options are listed in Section 15.5.2, "Additional Options for the virsh migrate Command" . The GuestName parameter represents the name of the guest virtual machine which you want to migrate. The DestinationURL parameter is the connection URL of the destination host physical machine. The destination system must run the same version of Red Hat Enterprise Linux, be using the same hypervisor and have libvirt running. Note The DestinationURL parameter for normal migration and peer2peer migration has different semantics: normal migration: the DestinationURL is the URL of the target host physical machine as seen from the source guest virtual machine. peer2peer migration: DestinationURL is the URL of the target host physical machine as seen from the source host physical machine. Once the command is entered, you will be prompted for the root password of the destination system. Important Name resolution must be working on both sides (source and destination) in order for migration to succeed. Each side must be able to find the other. Make sure that you can ping one side to the other to check that the name resolution is working. Example: live migration with virsh This example migrates from host1.example.com to host2.example.com . Change the host physical machine names for your environment. This example migrates a virtual machine named guest1-rhel6-64 . This example assumes you have fully configured shared storage and meet all the prerequisites (listed here: Migration requirements ). Verify the guest virtual machine is running From the source system, host1.example.com , verify guest1-rhel6-64 is running: Migrate the guest virtual machine Execute the following command to live migrate the guest virtual machine to the destination, host2.example.com . Append /system to the end of the destination URL to tell libvirt that you need full access. Once the command is entered you will be prompted for the root password of the destination system. Wait The migration may take some time depending on load and the size of the guest virtual machine. virsh only reports errors. The guest virtual machine continues to run on the source host physical machine until fully migrated. Verify the guest virtual machine has arrived at the destination host From the destination system, host2.example.com , verify guest1-rhel7-64 is running: The live migration is now complete. Note libvirt supports a variety of networking methods including TLS/SSL, UNIX sockets, SSH, and unencrypted TCP. For more information on using other methods, see Chapter 18, Remote Management of Guests . Note Non-running guest virtual machines can be migrated using the following command: 15.5.1. Additional Tips for Migration with virsh It is possible to perform multiple, concurrent live migrations where each migration runs in a separate command shell. However, this should be done with caution and should involve careful calculations as each migration instance uses one MAX_CLIENT from each side (source and target). As the default setting is 20, there is enough to run 10 instances without changing the settings. Should you need to change the settings, see the procedure Procedure 15.1, "Configuring libvirtd.conf" . 
Open the libvirtd.conf file as described in Procedure 15.1, "Configuring libvirtd.conf" . Look for the Processing controls section. Change the max_clients and max_workers parameters settings. It is recommended that the number be the same in both parameters. The max_clients will use 2 clients per migration (one per side) and max_workers will use 1 worker on the source and 0 workers on the destination during the perform phase and 1 worker on the destination during the finish phase. Important The max_clients and max_workers parameters settings are affected by all guest virtual machine connections to the libvirtd service. This means that any user that is using the same guest virtual machine and is performing a migration at the same time will also obey the limits set in the max_clients and max_workers parameters settings. This is why the maximum value needs to be considered carefully before performing a concurrent live migration. Important The max_clients parameter controls how many clients are allowed to connect to libvirt. When a large number of containers are started at once, this limit can be easily reached and exceeded. The value of the max_clients parameter could be increased to avoid this, but doing so can leave the system more vulnerable to denial of service (DoS) attacks against instances. To alleviate this problem, a new max_anonymous_clients setting has been introduced in Red Hat Enterprise Linux 7.0 that specifies a limit of connections which are accepted but not yet authenticated. You can implement a combination of max_clients and max_anonymous_clients to suit your workload. Save the file and restart the service. Note There may be cases where a migration connection drops because there are too many ssh sessions that have been started, but not yet authenticated. By default, sshd allows only 10 sessions to be in a "pre-authenticated state" at any time. This setting is controlled by the MaxStartups parameter in the sshd configuration file (located here: /etc/ssh/sshd_config ), which may require some adjustment. Adjusting this parameter should be done with caution as the limitation is put in place to prevent DoS attacks (and over-use of resources in general). Setting this value too high will negate its purpose. To change this parameter, edit the file /etc/ssh/sshd_config , remove the # from the beginning of the MaxStartups line, and change the 10 (default value) to a higher number. Remember to save the file and restart the sshd service. For more information, see the sshd_config man page. 15.5.2. Additional Options for the virsh migrate Command In addition to --live , virsh migrate accepts the following options: --direct - used for direct migration --p2p - used for peer-to-peer migration --tunneled - used for tunneled migration --offline - migrates domain definition without starting the domain on destination and without stopping it on source host. Offline migration may be used with inactive domains and it must be used with the --persistent option. --persistent - leaves the domain persistent on destination host physical machine --undefinesource - undefines the domain on the source host physical machine --suspend - leaves the domain paused on the destination host physical machine --change-protection - enforces that no incompatible configuration changes will be made to the domain while the migration is underway; this flag is implicitly enabled when supported by the hypervisor, but can be explicitly used to reject the migration if the hypervisor lacks change protection support. 
--unsafe - forces the migration to occur, ignoring all safety procedures. --verbose - displays the progress of migration as it is occurring --compressed - activates compression of memory pages that have to be transferred repeatedly during live migration. --abort-on-error - cancels the migration if a soft error (for example I/O error) happens during the migration. --domain [name] - sets the domain name, id or uuid. --desturi [URI] - connection URI of the destination host as seen from the client (normal migration) or source (p2p migration). --migrateuri [URI] - the migration URI, which can usually be omitted. --graphicsuri [URI] - graphics URI to be used for seamless graphics migration. --listen-address [address] - sets the listen address that the hypervisor on the destination side should bind to for incoming migration. --timeout [seconds] - forces a guest virtual machine to suspend when the live migration counter exceeds N seconds. It can only be used with a live migration. Once the timeout is initiated, the migration continues on the suspended guest virtual machine. --dname [newname] - is used for renaming the domain during migration, which also usually can be omitted --xml [filename] - the filename indicated can be used to supply an alternative XML file for use on the destination to supply a larger set of changes to any host-specific portions of the domain XML, such as accounting for naming differences between source and destination in accessing underlying storage. This option is usually omitted. --migrate-disks [disk_identifiers] - this option can be used to select which disks are copied during the migration. This allows for more efficient live migration when copying certain disks is undesirable, such as when they already exist on the destination, or when they are no longer useful. [disk_identifiers] should be replaced by a comma-separated list of disks to be migrated, identified by their arguments found in the <target dev= /> line of the Domain XML file. In addition, the following commands may help as well: virsh migrate-setmaxdowntime [domain] [downtime] - will set a maximum tolerable downtime for a domain which is being live-migrated to another host. The specified downtime is in milliseconds. The domain specified must be the same domain that is being migrated. virsh migrate-compcache [domain] --size - will set and or get the size of the cache in bytes which is used for compressing repeatedly transferred memory pages during a live migration. When the --size is not used the command displays the current size of the compression cache. When --size is used, and specified in bytes, the hypervisor is asked to change compression to match the indicated size, following which the current size is displayed. The --size argument is supposed to be used while the domain is being live migrated as a reaction to the migration progress and increasing number of compression cache misses obtained from the domjobinfo . virsh migrate-setspeed [domain] [bandwidth] - sets the migration bandwidth in Mib/sec for the specified domain which is being migrated to another host. virsh migrate-getspeed [domain] - gets the maximum migration bandwidth that is available in Mib/sec for the specified domain. For more information, see Migration Limitations or the virsh man page.
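As an illustrative combination of the options and helper commands above, reusing the guest and host names from the earlier example, a live migration with progress output, a suspend timeout, and tuned downtime and bandwidth might look like this (the numeric values are examples only):

# from one shell: start the live migration with progress reporting and a 240-second suspend timeout
virsh migrate --live --verbose --timeout 240 guest1-rhel7-64 qemu+ssh://host2.example.com/system
# from a second shell, while the migration is running, tune downtime and bandwidth
virsh migrate-setmaxdowntime guest1-rhel7-64 500
virsh migrate-setspeed guest1-rhel7-64 100
virsh migrate-getspeed guest1-rhel7-64

The downtime value is in milliseconds and the bandwidth values are in Mib/sec, as described for the individual commands above.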
[ "virsh migrate --live GuestName DestinationURL", "virsh list Id Name State ---------------------------------- 10 guest1-rhel6-64 running", "virsh migrate --live guest1-rhel7-64 qemu+ssh://host2.example.com/system", "virsh list Id Name State ---------------------------------- 10 guest1-rhel7-64 running", "virsh migrate --offline --persistent", "################################################################# # Processing controls # The maximum number of concurrent client connections to allow over all sockets combined. #max_clients = 5000 The maximum length of queue of connections waiting to be accepted by the daemon. Note, that some protocols supporting retransmission may obey this so that a later reattempt at connection succeeds. #max_queued_clients = 1000 The minimum limit sets the number of workers to start up initially. If the number of active clients exceeds this, then more threads are spawned, upto max_workers limit. Typically you'd want max_workers to equal maximum number of clients allowed #min_workers = 5 #max_workers = 20 The number of priority workers. If all workers from above pool will stuck, some calls marked as high priority (notably domainDestroy) can be executed in this pool. #prio_workers = 5 Total global limit on concurrent RPC calls. Should be at least as large as max_workers. Beyond this, RPC requests will be read into memory and queued. This directly impact memory usage, currently each request requires 256 KB of memory. So by default upto 5 MB of memory is used # XXX this isn't actually enforced yet, only the per-client limit is used so far #max_requests = 20 Limit on concurrent requests from a single client connection. To avoid one client monopolizing the server this should be a small fraction of the global max_requests and max_workers parameter #max_client_requests = 5 #################################################################" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-kvm_live_migration-live_kvm_migration_with_virsh
Chapter 12. Monitoring application health by using health checks
Chapter 12. Monitoring application health by using health checks In software systems, components can become unhealthy due to transient issues such as temporary connectivity loss, configuration errors, or problems with external dependencies. OpenShift Container Platform applications have a number of options to detect and handle unhealthy containers. 12.1. Understanding health checks A health check periodically performs diagnostics on a running container using any combination of the readiness, liveness, and startup health checks. You can include one or more probes in the specification for the pod that contains the container which you want to perform the health checks. Note If you want to add or edit health checks in an existing pod, you must edit the pod DeploymentConfig object or use the Developer perspective in the web console. You cannot use the CLI to add or edit health checks for an existing pod. Readiness probe A readiness probe determines if a container is ready to accept service requests. If the readiness probe fails for a container, the kubelet removes the pod from the list of available service endpoints. After a failure, the probe continues to examine the pod. If the pod becomes available, the kubelet adds the pod to the list of available service endpoints. Liveness health check A liveness probe determines if a container is still running. If the liveness probe fails due to a condition such as a deadlock, the kubelet kills the container. The pod then responds based on its restart policy. For example, a liveness probe on a pod with a restartPolicy of Always or OnFailure kills and restarts the container. Startup probe A startup probe indicates whether the application within a container is started. All other probes are disabled until the startup succeeds. If the startup probe does not succeed within a specified time period, the kubelet kills the container, and the container is subject to the pod restartPolicy . Some applications can require additional startup time on their first initialization. You can use a startup probe with a liveness or readiness probe to delay that probe long enough to handle lengthy start-up time using the failureThreshold and periodSeconds parameters. For example, you can add a startup probe, with a failureThreshold of 30 failures and a periodSeconds of 10 seconds (30 * 10s = 300s) for a maximum of 5 minutes, to a liveness probe. After the startup probe succeeds the first time, the liveness probe takes over. You can configure liveness, readiness, and startup probes with any of the following types of tests: HTTP GET : When using an HTTP GET test, the test determines the healthiness of the container by using a web hook. The test is successful if the HTTP response code is between 200 and 399 . You can use an HTTP GET test with applications that return HTTP status codes when completely initialized. Container Command: When using a container command test, the probe executes a command inside the container. The probe is successful if the test exits with a 0 status. TCP socket: When using a TCP socket test, the probe attempts to open a socket to the container. The container is only considered healthy if the probe can establish a connection. You can use a TCP socket test with applications that do not start listening until initialization is complete. You can configure several fields to control the behavior of a probe: initialDelaySeconds : The time, in seconds, after the container starts before the probe can be scheduled. The default is 0. 
periodSeconds : The delay, in seconds, between performing probes. The default is 10 . This value must be greater than timeoutSeconds . timeoutSeconds : The number of seconds of inactivity after which the probe times out and the container is assumed to have failed. The default is 1 . This value must be lower than periodSeconds . successThreshold : The number of times that the probe must report success after a failure to reset the container status to successful. The value must be 1 for a liveness probe. The default is 1 . failureThreshold : The number of times that the probe is allowed to fail. The default is 3. After the specified attempts: for a liveness probe, the container is restarted for a readiness probe, the pod is marked Unready for a startup probe, the container is killed and is subject to the pod's restartPolicy Example probes The following are samples of different probes as they would appear in an object specification. Sample readiness probe with a container command readiness probe in a pod spec apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application # ... spec: containers: - name: goproxy-app 1 args: image: registry.k8s.io/goproxy:0.1 2 readinessProbe: 3 exec: 4 command: 5 - cat - /tmp/healthy # ... 1 The container name. 2 The container image to deploy. 3 A readiness probe. 4 A container command test. 5 The commands to execute on the container. Sample container command startup probe and liveness probe with container command tests in a pod spec apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application # ... spec: containers: - name: goproxy-app 1 args: image: registry.k8s.io/goproxy:0.1 2 livenessProbe: 3 httpGet: 4 scheme: HTTPS 5 path: /healthz port: 8080 6 httpHeaders: - name: X-Custom-Header value: Awesome startupProbe: 7 httpGet: 8 path: /healthz port: 8080 9 failureThreshold: 30 10 periodSeconds: 10 11 # ... 1 The container name. 2 Specify the container image to deploy. 3 A liveness probe. 4 An HTTP GET test. 5 The internet scheme: HTTP or HTTPS . The default value is HTTP . 6 The port on which the container is listening. 7 A startup probe. 8 An HTTP GET test. 9 The port on which the container is listening. 10 The number of times to try the probe after a failure. 11 The number of seconds to perform the probe. Sample liveness probe with a container command test that uses a timeout in a pod spec apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application # ... spec: containers: - name: goproxy-app 1 args: image: registry.k8s.io/goproxy:0.1 2 livenessProbe: 3 exec: 4 command: 5 - /bin/bash - '-c' - timeout 60 /opt/eap/bin/livenessProbe.sh periodSeconds: 10 6 successThreshold: 1 7 failureThreshold: 3 8 # ... 1 The container name. 2 Specify the container image to deploy. 3 The liveness probe. 4 The type of probe, here a container command probe. 5 The command line to execute inside the container. 6 How often in seconds to perform the probe. 7 The number of consecutive successes needed to show success after a failure. 8 The number of times to try the probe after a failure. Sample readiness probe and liveness probe with a TCP socket test in a deployment kind: Deployment apiVersion: apps/v1 metadata: labels: test: health-check name: my-application spec: # ... 
template: spec: containers: - resources: {} readinessProbe: 1 tcpSocket: port: 8080 timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3 terminationMessagePath: /dev/termination-log name: ruby-ex livenessProbe: 2 tcpSocket: port: 8080 initialDelaySeconds: 15 timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3 # ... 1 The readiness probe. 2 The liveness probe. 12.2. Configuring health checks using the CLI To configure readiness, liveness, and startup probes, add one or more probes to the specification for the pod that contains the container which you want to perform the health checks Note If you want to add or edit health checks in an existing pod, you must edit the pod DeploymentConfig object or use the Developer perspective in the web console. You cannot use the CLI to add or edit health checks for an existing pod. Procedure To add probes for a container: Create a Pod object to add one or more probes: apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: my-container 1 args: image: registry.k8s.io/goproxy:0.1 2 livenessProbe: 3 tcpSocket: 4 port: 8080 5 initialDelaySeconds: 15 6 periodSeconds: 20 7 timeoutSeconds: 10 8 readinessProbe: 9 httpGet: 10 host: my-host 11 scheme: HTTPS 12 path: /healthz port: 8080 13 startupProbe: 14 exec: 15 command: 16 - cat - /tmp/healthy failureThreshold: 30 17 periodSeconds: 20 18 timeoutSeconds: 10 19 1 Specify the container name. 2 Specify the container image to deploy. 3 Optional: Create a Liveness probe. 4 Specify a test to perform, here a TCP Socket test. 5 Specify the port on which the container is listening. 6 Specify the time, in seconds, after the container starts before the probe can be scheduled. 7 Specify the number of seconds to perform the probe. The default is 10 . This value must be greater than timeoutSeconds . 8 Specify the number of seconds of inactivity after which the probe is assumed to have failed. The default is 1 . This value must be lower than periodSeconds . 9 Optional: Create a Readiness probe. 10 Specify the type of test to perform, here an HTTP test. 11 Specify a host IP address. When host is not defined, the PodIP is used. 12 Specify HTTP or HTTPS . When scheme is not defined, the HTTP scheme is used. 13 Specify the port on which the container is listening. 14 Optional: Create a Startup probe. 15 Specify the type of test to perform, here an Container Execution probe. 16 Specify the commands to execute on the container. 17 Specify the number of times to try the probe after a failure. 18 Specify the number of seconds to perform the probe. The default is 10 . This value must be greater than timeoutSeconds . 19 Specify the number of seconds of inactivity after which the probe is assumed to have failed. The default is 1 . This value must be lower than periodSeconds . Note If the initialDelaySeconds value is lower than the periodSeconds value, the first Readiness probe occurs at some point between the two periods due to an issue with timers. The timeoutSeconds value must be lower than the periodSeconds value. 
Create the Pod object: USD oc create -f <file-name>.yaml Verify the state of the health check pod: USD oc describe pod my-application Example output Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 9s default-scheduler Successfully assigned openshift-logging/liveness-exec to ip-10-0-143-40.ec2.internal Normal Pulling 2s kubelet, ip-10-0-143-40.ec2.internal pulling image "registry.k8s.io/liveness" Normal Pulled 1s kubelet, ip-10-0-143-40.ec2.internal Successfully pulled image "registry.k8s.io/liveness" Normal Created 1s kubelet, ip-10-0-143-40.ec2.internal Created container Normal Started 1s kubelet, ip-10-0-143-40.ec2.internal Started container The following is the output of a failed probe that restarted a container: Sample Liveness check output with unhealthy container USD oc describe pod pod1 Example output .... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled <unknown> Successfully assigned aaa/liveness-http to ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Normal AddedInterface 47s multus Add eth0 [10.129.2.11/23] Normal Pulled 46s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image "registry.k8s.io/liveness" in 773.406244ms Normal Pulled 28s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image "registry.k8s.io/liveness" in 233.328564ms Normal Created 10s (x3 over 46s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Created container liveness Normal Started 10s (x3 over 46s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Started container liveness Warning Unhealthy 10s (x6 over 34s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Liveness probe failed: HTTP probe failed with statuscode: 500 Normal Killing 10s (x2 over 28s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Container liveness failed liveness probe, will be restarted Normal Pulling 10s (x3 over 47s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Pulling image "registry.k8s.io/liveness" Normal Pulled 10s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image "registry.k8s.io/liveness" in 244.116568ms 12.3. Monitoring application health using the Developer perspective You can use the Developer perspective to add three types of health probes to your container to ensure that your application is healthy: Use the Readiness probe to check if the container is ready to handle requests. Use the Liveness probe to check if the container is running. Use the Startup probe to check if the application within the container has started. You can add health checks either while creating and deploying an application, or after you have deployed an application. 12.4. Editing health checks using the Developer perspective You can use the Topology view to edit health checks added to your application, modify them, or add more health checks. Prerequisites You have switched to the Developer perspective in the web console. You have created and deployed an application on OpenShift Container Platform using the Developer perspective. You have added health checks to your application. Procedure In the Topology view, right-click your application and select Edit Health Checks . Alternatively, in the side panel, click the Actions drop-down list and select Edit Health Checks . In the Edit Health Checks page: To remove a previously added health probe, click the Remove icon adjoining it. To edit the parameters of an existing probe: Click the Edit Probe link to a previously added probe to see the parameters for the probe. 
Modify the parameters as required, and click the check mark to save your changes. To add a new health probe, in addition to existing health checks, click the add probe links. For example, to add a Liveness probe that checks if your container is running: Click Add Liveness Probe , to see a form containing the parameters for the probe. Edit the probe parameters as required. Note The Timeout value must be lower than the Period value. The Timeout default value is 1 . The Period default value is 10 . Click the check mark at the bottom of the form. The Liveness Probe Added message is displayed. Click Save to save your modifications and add the additional probes to your container. You are redirected to the Topology view. In the side panel, verify that the probes have been added by clicking on the deployed pod under the Pods section. In the Pod Details page, click the listed container in the Containers section. In the Container Details page, verify that the Liveness probe - HTTP Get 10.129.4.65:8080/ has been added to the container, in addition to the earlier existing probes. 12.5. Monitoring health check failures using the Developer perspective In case an application health check fails, you can use the Topology view to monitor these health check violations. Prerequisites You have switched to the Developer perspective in the web console. You have created and deployed an application on OpenShift Container Platform using the Developer perspective. You have added health checks to your application. Procedure In the Topology view, click on the application node to see the side panel. Click the Observe tab to see the health check failures in the Events (Warning) section. Click the down arrow adjoining Events (Warning) to see the details of the health check failure. Additional resources For details on switching to the Developer perspective in the web console, see About the Developer perspective . For details on adding health checks while creating and deploying an application, see Advanced Options in the Creating applications using the Developer perspective section.
[ "apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: goproxy-app 1 args: image: registry.k8s.io/goproxy:0.1 2 readinessProbe: 3 exec: 4 command: 5 - cat - /tmp/healthy", "apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: goproxy-app 1 args: image: registry.k8s.io/goproxy:0.1 2 livenessProbe: 3 httpGet: 4 scheme: HTTPS 5 path: /healthz port: 8080 6 httpHeaders: - name: X-Custom-Header value: Awesome startupProbe: 7 httpGet: 8 path: /healthz port: 8080 9 failureThreshold: 30 10 periodSeconds: 10 11", "apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: goproxy-app 1 args: image: registry.k8s.io/goproxy:0.1 2 livenessProbe: 3 exec: 4 command: 5 - /bin/bash - '-c' - timeout 60 /opt/eap/bin/livenessProbe.sh periodSeconds: 10 6 successThreshold: 1 7 failureThreshold: 3 8", "kind: Deployment apiVersion: apps/v1 metadata: labels: test: health-check name: my-application spec: template: spec: containers: - resources: {} readinessProbe: 1 tcpSocket: port: 8080 timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3 terminationMessagePath: /dev/termination-log name: ruby-ex livenessProbe: 2 tcpSocket: port: 8080 initialDelaySeconds: 15 timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3", "apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: my-container 1 args: image: registry.k8s.io/goproxy:0.1 2 livenessProbe: 3 tcpSocket: 4 port: 8080 5 initialDelaySeconds: 15 6 periodSeconds: 20 7 timeoutSeconds: 10 8 readinessProbe: 9 httpGet: 10 host: my-host 11 scheme: HTTPS 12 path: /healthz port: 8080 13 startupProbe: 14 exec: 15 command: 16 - cat - /tmp/healthy failureThreshold: 30 17 periodSeconds: 20 18 timeoutSeconds: 10 19", "oc create -f <file-name>.yaml", "oc describe pod my-application", "Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 9s default-scheduler Successfully assigned openshift-logging/liveness-exec to ip-10-0-143-40.ec2.internal Normal Pulling 2s kubelet, ip-10-0-143-40.ec2.internal pulling image \"registry.k8s.io/liveness\" Normal Pulled 1s kubelet, ip-10-0-143-40.ec2.internal Successfully pulled image \"registry.k8s.io/liveness\" Normal Created 1s kubelet, ip-10-0-143-40.ec2.internal Created container Normal Started 1s kubelet, ip-10-0-143-40.ec2.internal Started container", "oc describe pod pod1", ". 
Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled <unknown> Successfully assigned aaa/liveness-http to ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Normal AddedInterface 47s multus Add eth0 [10.129.2.11/23] Normal Pulled 46s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"registry.k8s.io/liveness\" in 773.406244ms Normal Pulled 28s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"registry.k8s.io/liveness\" in 233.328564ms Normal Created 10s (x3 over 46s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Created container liveness Normal Started 10s (x3 over 46s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Started container liveness Warning Unhealthy 10s (x6 over 34s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Liveness probe failed: HTTP probe failed with statuscode: 500 Normal Killing 10s (x2 over 28s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Container liveness failed liveness probe, will be restarted Normal Pulling 10s (x3 over 47s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Pulling image \"registry.k8s.io/liveness\" Normal Pulled 10s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"registry.k8s.io/liveness\" in 244.116568ms" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/building_applications/application-health
Chapter 5. Editing Virtual Machines
Chapter 5. Editing Virtual Machines 5.1. Editing Virtual Machine Properties Changes to storage, operating system, or networking parameters can adversely affect the virtual machine. Ensure that you have the correct details before attempting to make any changes. Virtual machines can be edited while running, and some changes (listed in the procedure below) will be applied immediately. To apply all other changes, the virtual machine must be shut down and restarted. Note External virtual machines (marked with the prefix external ) cannot be edited through the Red Hat Virtualization Manager. Editing Virtual Machines Click Compute Virtual Machines . Select the virtual machine to be edited. Click Edit . Change settings as required. Changes to the following settings are applied immediately: Name Description Comment Optimized for (Desktop/Server/High Performance) Delete Protection Network Interfaces Memory Size (Edit this field to hot plug virtual memory. See Hot Plugging Virtual Memory .) Virtual Sockets (Edit this field to hot plug CPUs. See CPU hot plug .) Highly Available Priority for Run/Migration queue Disable strict user checking Icon Click OK . If the Start Configuration pop-up window appears, click OK . Some changes are applied immediately. All other changes are applied when you shut down and restart your virtual machine. Until then, the pending changes icon appears as a reminder to restart the virtual machine.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/chap-editing_virtual_machines
9.9. Serial Driver
9.9. Serial Driver The para-virtualized serial driver ( virtio-serial ) is a bytestream-oriented, character stream driver. The para-virtualized serial driver provides a simple communication interface between the host's user space and the guest's user space where networking is not available or is unusable.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/serial_driver
Chapter 10. Multiple regions and zones configuration for a cluster on vSphere
Chapter 10. Multiple regions and zones configuration for a cluster on vSphere As an administrator, you can specify multiple regions and zones for your OpenShift Container Platform cluster that runs on a VMware vSphere instance. This configuration reduces the risk of a hardware failure or network outage causing your cluster to fail. A failure domain configuration lists parameters that create a topology. The following list states some of these parameters: computeCluster datacenter datastore networks resourcePool After you define multiple regions and zones for your OpenShift Container Platform cluster, you can create or migrate nodes to another failure domain. Important If you want to migrate pre-existing OpenShift Container Platform cluster compute nodes to a failure domain, you must define a new compute machine set for the compute node. This new machine set can scale up a compute node according to the topology of the failure domain, and scale down the pre-existing compute node. The cloud provider adds topology.kubernetes.io/zone and topology.kubernetes.io/region labels to any compute node provisioned by a machine set resource. For more information, see Creating a compute machine set . 10.1. Specifying multiple regions and zones for your cluster on vSphere You can configure the infrastructures.config.openshift.io configuration resource to specify multiple regions and zones for your OpenShift Container Platform cluster that runs on a VMware vSphere instance. Topology-aware features for the cloud controller manager and the vSphere Container Storage Interface (CSI) Operator Driver require information about the vSphere topology where you host your OpenShift Container Platform cluster. This topology information exists in the infrastructures.config.openshift.io configuration resource. Before you specify regions and zones for your cluster, you must ensure that all datacenters and compute clusters contain tags, so that the cloud provider can add labels to your node. For example, if datacenter-1 represents region-a and compute-cluster-1 represents zone-1 , the cloud provider adds an openshift-region category label with a value of region-a to datacenter-1 . Additionally, the cloud provider adds an openshift-zone category tag with a value of zone-1 to compute-cluster-1 . Note You can migrate control plane nodes with vMotion capabilities to a failure domain. After you add these nodes to a failure domain, the cloud provider adds topology.kubernetes.io/zone and topology.kubernetes.io/region labels to these nodes. Prerequisites You created the openshift-region and openshift-zone tag categories on the vCenter server. You ensured that each datacenter and compute cluster contains tags that represent the name of their associated region or zone, or both. Optional: If you defined API and Ingress static IP addresses to the installation program, you must ensure that all regions and zones share a common layer 2 network. This configuration ensures that API and Ingress Virtual IP (VIP) addresses can interact with your cluster. Important If you do not supply tags to all datacenters and compute clusters before you create a node or migrate a node, the cloud provider cannot add the topology.kubernetes.io/zone and topology.kubernetes.io/region labels to the node. This means that services cannot route traffic to your node. 
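As an illustration only, the tag categories and tags from the example above could be created with the govc CLI along the following lines; the category names match the prerequisites, the datacenter and compute cluster inventory paths are placeholders that depend on your vCenter layout, and the exact flags may vary between govc versions:
govc tags.category.create -d "OpenShift region" openshift-region
govc tags.category.create -d "OpenShift zone" openshift-zone
govc tags.create -c openshift-region region-a
govc tags.create -c openshift-zone zone-1
govc tags.attach -c openshift-region region-a /datacenter-1
govc tags.attach -c openshift-zone zone-1 /datacenter-1/host/compute-cluster-1
Any tooling that can create vSphere tags works equally well; the only requirement is that every datacenter and compute cluster referenced by a failure domain carries the appropriate openshift-region and openshift-zone tags before nodes are created or migrated.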
Procedure Edit the infrastructures.config.openshift.io custom resource definition (CRD) of your cluster to specify multiple regions and zones in the failureDomains section of the resource by running the following command: USD oc edit infrastructures.config.openshift.io cluster Example infrastructures.config.openshift.io CRD for a instance named cluster with multiple regions and zones defined in its configuration spec: cloudConfig: key: config name: cloud-provider-config platformSpec: type: vSphere vsphere: vcenters: - datacenters: - <region_a_datacenter> - <region_b_datacenter> port: 443 server: <your_vcenter_server> failureDomains: - name: <failure_domain_1> region: <region_a> zone: <zone_a> server: <your_vcenter_server> topology: datacenter: <region_a_dc> computeCluster: "</region_a_dc/host/zone_a_cluster>" resourcePool: "</region_a_dc/host/zone_a_cluster/Resources/resource_pool>" datastore: "</region_a_dc/datastore/datastore_a>" networks: - port-group - name: <failure_domain_2> region: <region_a> zone: <zone_b> server: <your_vcenter_server> topology: computeCluster: </region_a_dc/host/zone_b_cluster> datacenter: <region_a_dc> datastore: </region_a_dc/datastore/datastore_a> networks: - port-group - name: <failure_domain_3> region: <region_b> zone: <zone_a> server: <your_vcenter_server> topology: computeCluster: </region_b_dc/host/zone_a_cluster> datacenter: <region_b_dc> datastore: </region_b_dc/datastore/datastore_b> networks: - port-group nodeNetworking: external: {} internal: {} Important After you create a failure domain and you define it in a CRD for a VMware vSphere cluster, you must not modify or delete the failure domain. Doing any of these actions with this configuration can impact the availability and fault tolerance of a control plane machine. Save the resource file to apply the changes. Additional resources Parameters for the cluster-wide infrastructure CRD 10.2. Enabling a multiple layer 2 network for your cluster You can configure your cluster to use a multiple layer 2 network configuration so that data transfer among nodes can span across multiple networks. Prerequisites You configured network connectivity among machines so that cluster components can communicate with each other. Procedure If you installed your cluster with installer-provisioned infrastructure, you must ensure that all control plane nodes share a common layer 2 network. Additionally, ensure compute nodes that are configured for Ingress pod scheduling share a common layer 2 network. If you need compute nodes to span multiple layer 2 networks, you can create infrastructure nodes that can host Ingress pods. If you need to provision workloads across additional layer 2 networks, you can create compute machine sets on vSphere and then move these workloads to your target layer 2 networks. If you installed your cluster on infrastructure that you provided, which is defined as a user-provisioned infrastructure, complete the following actions to meet your needs: Configure your API load balancer and network so that the load balancer can reach the API and Machine Config Server on the control plane nodes. Configure your Ingress load balancer and network so that the load balancer can reach the Ingress pods on the compute or infrastructure nodes. Additional resources Installing a cluster on vSphere with network customizations Creating infrastructure machine sets for production environments Creating a compute machine set 10.3. 
Parameters for the cluster-wide infrastructure CRD You must set values for specific parameters in the cluster-wide infrastructure, infrastructures.config.openshift.io , Custom Resource Definition (CRD) to define multiple regions and zones for your OpenShift Container Platform cluster that runs on a VMware vSphere instance. The following table lists mandatory parameters for defining multiple regions and zones for your OpenShift Container Platform cluster: Parameter Description vcenters The vCenter server for your OpenShift Container Platform cluster. You can only specify one vCenter for your cluster. datacenters vCenter datacenters where VMs associated with the OpenShift Container Platform cluster will be created or presently exist. port The TCP port of the vCenter server. server The fully qualified domain name (FQDN) of the vCenter server. failureDomains The list of failure domains. name The name of the failure domain. region The value of the openshift-region tag assigned to the topology for the failure domain. zone The value of the openshift-zone tag assigned to the topology for the failure domain. topology The vCenter resources associated with the failure domain. datacenter The datacenter associated with the failure domain. computeCluster The full path of the compute cluster associated with the failure domain. resourcePool The full path of the resource pool associated with the failure domain. datastore The full path of the datastore associated with the failure domain. networks A list of port groups associated with the failure domain. Only one port group may be defined. Additional resources Specifying multiple regions and zones for your cluster on vSphere
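After nodes have been created in or migrated to a failure domain, one quick way to confirm that the cloud provider applied the expected topology labels is to list them as columns (this assumes cluster-admin access with the oc client):
oc get nodes -L topology.kubernetes.io/region -L topology.kubernetes.io/zone
Nodes that show empty values in either column indicate that the corresponding datacenter or compute cluster was not tagged when the node was provisioned.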
[ "oc edit infrastructures.config.openshift.io cluster", "spec: cloudConfig: key: config name: cloud-provider-config platformSpec: type: vSphere vsphere: vcenters: - datacenters: - <region_a_datacenter> - <region_b_datacenter> port: 443 server: <your_vcenter_server> failureDomains: - name: <failure_domain_1> region: <region_a> zone: <zone_a> server: <your_vcenter_server> topology: datacenter: <region_a_dc> computeCluster: \"</region_a_dc/host/zone_a_cluster>\" resourcePool: \"</region_a_dc/host/zone_a_cluster/Resources/resource_pool>\" datastore: \"</region_a_dc/datastore/datastore_a>\" networks: - port-group - name: <failure_domain_2> region: <region_a> zone: <zone_b> server: <your_vcenter_server> topology: computeCluster: </region_a_dc/host/zone_b_cluster> datacenter: <region_a_dc> datastore: </region_a_dc/datastore/datastore_a> networks: - port-group - name: <failure_domain_3> region: <region_b> zone: <zone_a> server: <your_vcenter_server> topology: computeCluster: </region_b_dc/host/zone_a_cluster> datacenter: <region_b_dc> datastore: </region_b_dc/datastore/datastore_b> networks: - port-group nodeNetworking: external: {} internal: {}" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_vsphere/post-install-vsphere-zones-regions-configuration
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback: For simple comments on specific passages: Make sure you are viewing the documentation in the HTML format. In addition, ensure you see the Feedback button in the upper right corner of the document. Use your mouse cursor to highlight the part of text that you want to comment on. Click the Add Feedback pop-up that appears below the highlighted text. Follow the displayed instructions. For submitting more complex feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/4.13_release_notes/providing-feedback-on-red-hat-documentation_release-notes
Chapter 202. Kubernetes Nodes Component
Chapter 202. Kubernetes Nodes Component Available as of Camel version 2.17 The Kubernetes Nodes component is one of Kubernetes Components which provides a producer to execute kubernetes node operations and a consumer to consume kubernetes node events. 202.1. Component Options The Kubernetes Nodes component has no options. 202.2. Endpoint Options The Kubernetes Nodes endpoint is configured using URI syntax: with the following path and query parameters: 202.2.1. Path Parameters (1 parameters): Name Description Default Type masterUrl Required Kubernetes API server URL String 202.2.2. Query Parameters (28 parameters): Name Description Default Type apiVersion (common) The Kubernetes API Version to use String dnsDomain (common) The dns domain, used for ServiceCall EIP String kubernetesClient (common) Default KubernetesClient to use if provided KubernetesClient portName (common) The port name, used for ServiceCall EIP String portProtocol (common) The port protocol, used for ServiceCall EIP tcp String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean labelKey (consumer) The Consumer Label key when watching at some resources String labelValue (consumer) The Consumer Label value when watching at some resources String namespace (consumer) The namespace String poolSize (consumer) The Consumer pool size 1 int resourceName (consumer) The Consumer Resource Name we would like to watch String exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern operation (producer) Producer operation to do on Kubernetes String connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean caCertData (security) The CA Cert Data String caCertFile (security) The CA Cert File String clientCertData (security) The Client Cert Data String clientCertFile (security) The Client Cert File String clientKeyAlgo (security) The Key Algorithm used by the client String clientKeyData (security) The Client Key data String clientKeyFile (security) The Client Key file String clientKeyPassphrase (security) The Client Key Passphrase String oauthToken (security) The Auth Token String password (security) Password to connect to Kubernetes String trustCerts (security) Define if the certs we used are trusted anyway or not Boolean username (security) Username to connect to Kubernetes String 202.3. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.kubernetes-nodes.enabled Whether to enable auto configuration of the kubernetes-nodes component. This is enabled by default. 
Boolean camel.component.kubernetes-nodes.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean
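As a sketch of how the endpoint is typically used (the master URL and token are placeholders, and the listNodes operation name is an assumption drawn from the wider camel-kubernetes component rather than something defined in this chapter): a consumer that watches node events can be declared as from("kubernetes-nodes://https://my-master:8443?oauthToken=myToken"), and a producer that lists nodes on demand as to("kubernetes-nodes://https://my-master:8443?oauthToken=myToken&operation=listNodes"). A route usually follows the consumer with a processor or a log step to inspect the delivered node event.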
[ "kubernetes-nodes:masterUrl" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/kubernetes-nodes-component
Chapter 1. Overview
Chapter 1. Overview AMQ Core Protocol JMS is a Java Message Service (JMS) 2.0 client for use in messaging applications that send and receive Artemis Core Protocol messages. AMQ Core Protocol JMS is part of AMQ Clients, a suite of messaging libraries supporting multiple languages and platforms. For an overview of the clients, see AMQ Clients Overview . For information about this release, see AMQ Clients 2.9 Release Notes . AMQ Core Protocol JMS is based on the JMS implementation from Apache ActiveMQ Artemis . For more information about the JMS API, see the JMS API reference and the JMS tutorial . 1.1. Key features JMS 1.1 and 2.0 compatible SSL/TLS for secure communication Automatic reconnect and failover Distributed transactions (XA) Pure-Java implementation 1.2. Supported standards and protocols AMQ Core Protocol JMS supports the following industry-recognized standards and network protocols: Version 2.0 of the Java Message Service API Versions 1.0, 1.1, 1.2, and 1.3 of the Transport Layer Security (TLS) protocol, the successor to SSL Modern TCP with IPv6 1.3. Supported configurations AMQ Core Protocol JMS supports the OS and language versions listed below. For more information, see Red Hat AMQ 7 Supported Configurations . Red Hat Enterprise Linux 7 and 8 with the following JDKs: OpenJDK 8 and 11 Oracle JDK 8 IBM JDK 8 IBM AIX 7.1 with IBM JDK 8 Microsoft Windows 10 Pro with Oracle JDK 8 Microsoft Windows Server 2012 R2 and 2016 with Oracle JDK 8 Oracle Solaris 10 and 11 with Oracle JDK 8 AMQ Core Protocol JMS is supported in combination with the latest version of AMQ Broker. 1.4. Terms and concepts This section introduces the core API entities and describes how they operate together. Table 1.1. API terms Entity Description ConnectionFactory An entry point for creating connections. Connection A channel for communication between two peers on a network. It contains sessions. Session A context for producing and consuming messages. It contains message producers and consumers. MessageProducer A channel for sending messages to a destination. It has a target destination. MessageConsumer A channel for receiving messages from a destination. It has a source destination. Destination A named location for messages, either a queue or a topic. Queue A stored sequence of messages. Topic A stored sequence of messages for multicast distribution. Message An application-specific piece of information. AMQ Core Protocol JMS sends and receives messages . Messages are transferred between connected peers using message producers and consumers . Producers and consumers are established over sessions . Sessions are established over connections . Connections are created by connection factories . A sending peer creates a producer to send messages. The producer has a destination that identifies a target queue or topic at the remote peer. A receiving peer creates a consumer to receive messages. Like the producer, the consumer has a destination that identifies a source queue or topic at the remote peer. A destination is either a queue or a topic . In JMS, queues and topics are client-side representations of named broker entities that hold messages. A queue implements point-to-point semantics. Each message is seen by only one consumer, and the message is removed from the queue after it is read. A topic implements publish-subscribe semantics. Each message is seen by multiple consumers, and the message remains available to other consumers after it is read. See the JMS tutorial for more information. 1.5. 
Document conventions The sudo command In this document, sudo is used for any command that requires root privileges. Exercise caution when using sudo because any changes can affect the entire system. For more information about sudo , see Using the sudo command . File paths In this document, all file paths are valid for Linux, UNIX, and similar operating systems (for example, /home/andrea ). On Microsoft Windows, you must use the equivalent Windows paths (for example, C:\Users\andrea ). Variable text This document contains code blocks with variables that you must replace with values specific to your environment. Variable text is enclosed in arrow braces and styled as italic monospace. For example, in the following command, replace <project-dir> with the value for your environment: USD cd <project-dir>
[ "cd <project-dir>" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_core_protocol_jms_client/overview
Chapter 16. Tutorial: Updating component routes with custom domains and TLS certificates
Chapter 16. Tutorial: Updating component routes with custom domains and TLS certificates This guide demonstrates how to modify the hostname and TLS certificate of the Web console, OAuth server, and Downloads component routes in Red Hat OpenShift Service on AWS (ROSA) version 4.14 and above. [1] The changes that we make to the component routes [2] in this guide are described in greater detail in the customizing the internal OAuth server URL , console route , and download route OpenShift Container Platform documentation. 16.1. Prerequisites ROSA CLI ( rosa ) version 1.2.37 or higher AWS CLI ( aws ) A ROSA Classic cluster version 4.14 or higher Note ROSA with HCP is not supported at this time. OpenShift CLI ( oc ) jq CLI Access to the cluster as a user with the cluster-admin role. OpenSSL (for generating the demonstration SSL/TLS certificates) 16.2. Setting up your environment Log in to your cluster using an account with cluster-admin privileges. Configure an environment variable for your cluster name: USD export CLUSTER_NAME=USD(oc get infrastructure cluster -o=jsonpath="{.status.infrastructureName}" | sed 's/-[a-z0-9]\{5\}USD//') Ensure all fields output correctly before moving to the section: USD echo "Cluster: USD{CLUSTER_NAME}" Example output Cluster: my-rosa-cluster 16.3. Find the current routes Verify that you can reach the component routes on their default hostnames. You can find the hostnames by querying the lists of routes in the openshift-console and openshift-authentication projects. USD oc get routes -n openshift-console USD oc get routes -n openshift-authentication Example output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD console console-openshift-console.apps.my-example-cluster-aws.z9a9.p1.openshiftapps.com ... 1 more console https reencrypt/Redirect None downloads downloads-openshift-console.apps.my-example-cluster-aws.z9a9.p1.openshiftapps.com ... 1 more downloads http edge/Redirect None NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD oauth-openshift oauth-openshift.apps.my-example-cluster-aws.z9a9.p1.openshiftapps.com ... 1 more oauth-openshift 6443 passthrough/Redirect None From this output you can see that our base hostname is z9a9.p1.openshiftapps.com . Get the ID of the default ingress by running the following command: USD export INGRESS_ID=USD(rosa list ingress -c USD{CLUSTER_NAME} -o json | jq -r '.[] | select(.default == true) | .id') Ensure all fields output correctly before moving to the section: USD echo "Ingress ID: USD{INGRESS_ID}" Example output Ingress ID: r3l6 By running these commands you can see that the default component routes for our cluster are: console-openshift-console.apps.my-example-cluster-aws.z9a9.p1.openshiftapps.com for Console downloads-openshift-console.apps.my-example-cluster-aws.z9a9.p1.openshiftapps.com for Downloads oauth-openshift.apps.my-example-cluster-aws.z9a9.p1.openshiftapps.com for OAuth We can use the rosa edit ingress command to change the hostname of each service and add a TLS certificate for all of our component routes. The relevant parameters are shown in this excerpt of the command line help for the rosa edit ingress command: USD rosa edit ingress -h Edit a cluster ingress for a cluster. Usage: rosa edit ingress ID [flags] [...] --component-routes string Component routes settings. Available keys [oauth, console, downloads]. For each key a pair of hostname and tlsSecretRef is expected to be supplied. 
Format should be a comma separate list 'oauth: hostname=example-hostname;tlsSecretRef=example-secret-ref,downloads:...' For this example, we'll use the following custom component routes: console.my-new-domain.dev for Console downloads.my-new-domain.dev for Downloads oauth.my-new-domain.dev for OAuth 16.4. Create a valid TLS certificate for each component route In this section, we create three separate self-signed certificate key pairs and then trust them to verify that we can access our new component routes using a real web browser. Warning This is for demonstration purposes only, and is not recommended as a solution for production workloads. Consult your certificate authority to understand how to create certificates with similar attributes for your production workloads. Important To prevent issues with HTTP/2 connection coalescing, you must use a separate individual certificate for each endpoint. Using a wildcard or SAN certificate is not supported. Generate a certificate for each component route, taking care to set our certificate's subject ( -subj ) to the custom domain of the component route we want to use: Example USD openssl req -newkey rsa:2048 -new -nodes -x509 -days 365 -keyout key-console.pem -out cert-console.pem -subj "/CN=console.my-new-domain.dev" USD openssl req -newkey rsa:2048 -new -nodes -x509 -days 365 -keyout key-downloads.pem -out cert-downloads.pem -subj "/CN=downloads.my-new-domain.dev" USD openssl req -newkey rsa:2048 -new -nodes -x509 -days 365 -keyout key-oauth.pem -out cert-oauth.pem -subj "/CN=oauth.my-new-domain.dev" This generates three pairs of .pem files, key-<component>.pem and cert-<component>.pem . 16.5. Add the certificates to the cluster as secrets Create three TLS secrets in the openshift-config namespace. These become your secret reference when you update the component routes later in this guide. USD oc create secret tls console-tls --cert=cert-console.pem --key=key-console.pem -n openshift-config USD oc create secret tls downloads-tls --cert=cert-downloads.pem --key=key-downloads.pem -n openshift-config USD oc create secret tls oauth-tls --cert=cert-oauth.pem --key=key-oauth.pem -n openshift-config 16.6. Find the hostname of the load balancer in your cluster When you create a cluster, the service creates a load balancer and generates a hostname for that load balancer. We need to know the load balancer hostname in order to create DNS records for our cluster. You can find the hostname by running the oc get svc command against the openshift-ingress namespace. The hostname of the load balancer is the EXTERNAL-IP associated with the router-default service in the openshift-ingress namespace. USD oc get svc -n openshift-ingress NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.237.88 a234gsr3242rsfsfs-1342r624.us-east-1.elb.amazonaws.com 80:31175/TCP,443:31554/TCP 76d In our case, the hostname is a234gsr3242rsfsfs-1342r624.us-east-1.elb.amazonaws.com . Save this value for later, as we will need it to configure DNS records for our new component route hostnames. 16.7. Add component route DNS records to your hosting provider In your hosting provider, add DNS records that map the CNAME of your new component route hostnames to the load balancer hostname we found in the step. 16.8. Update the component routes and TLS secret using the ROSA CLI When your DNS records have been updated, you can use the ROSA CLI to change the component routes. 
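Before running the update, you can optionally confirm that the new records resolve; for example, for the Console hostname used in this tutorial (dig is assumed to be available on your workstation):
dig +short console.my-new-domain.dev CNAME
The answer should be the load balancer hostname found in the previous section. The Downloads and OAuth hostnames can be checked the same way.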
Use the rosa edit ingress command to update your default ingress route with the new base domain and the secret reference associated with it, taking care to update the hostnames for each component route. USD rosa edit ingress -c USD{CLUSTER_NAME} USD{INGRESS_ID} --component-routes 'console: hostname=console.my-new-domain.dev;tlsSecretRef=console-tls,downloads: hostname=downloads.my-new-domain.dev;tlsSecretRef=downloads-tls,oauth: hostname=oauth.my-new-domain.dev;tlsSecretRef=oauth-tls' Note You can also edit only a subset of the component routes by leaving the component routes you do not want to change set to an empty string. For example, if you only want to change the Console and OAuth server hostnames and TLS certificates, you would run the following command: USD rosa edit ingress -c USD{CLUSTER_NAME} USD{INGRESS_ID} --component-routes 'console: hostname=console.my-new-domain.dev;tlsSecretRef=console-tls,downloads: hostname="";tlsSecretRef="", oauth: hostname=oauth.my-new-domain.dev;tlsSecretRef=oauth-tls' Run the rosa list ingress command to verify that your changes were successfully made: USD rosa list ingress -c USD{CLUSTER_NAME} -ojson | jq ".[] | select(.id == \"USD{INGRESS_ID}\") | .component_routes" Example output { "console": { "kind": "ComponentRoute", "hostname": "console.my-new-domain.dev", "tls_secret_ref": "console-tls" }, "downloads": { "kind": "ComponentRoute", "hostname": "downloads.my-new-domain.dev", "tls_secret_ref": "downloads-tls" }, "oauth": { "kind": "ComponentRoute", "hostname": "oauth.my-new-domain.dev", "tls_secret_ref": "oauth-tls" } } Add your certificate to the truststore on your local system, then confirm that you can access your components at their new routes using your local web browser. 16.9. Reset the component routes to the default using the ROSA CLI If you want to reset the component routes to the default configuration, run the following rosa edit ingress command: USD rosa edit ingress -c USD{CLUSTER_NAME} USD{INGRESS_ID} --component-routes 'console: hostname="";tlsSecretRef="",downloads: hostname="";tlsSecretRef="", oauth: hostname="";tlsSecretRef=""' [1] Modifying these routes on Red Hat OpenShift Service on AWS ROSA versions prior to 4.14 is not typically supported. However, if you have a cluster using version 4.13, you can request for Red Hat Support to enable support for this feature on your version 4.13 cluster by opening a support case . [2] We use the term "component routes" to refer to the OAuth, Console, and Downloads routes that are provided when ROSA are first installed.
[ "export CLUSTER_NAME=USD(oc get infrastructure cluster -o=jsonpath=\"{.status.infrastructureName}\" | sed 's/-[a-z0-9]\\{5\\}USD//')", "echo \"Cluster: USD{CLUSTER_NAME}\"", "Cluster: my-rosa-cluster", "oc get routes -n openshift-console oc get routes -n openshift-authentication", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD console console-openshift-console.apps.my-example-cluster-aws.z9a9.p1.openshiftapps.com ... 1 more console https reencrypt/Redirect None downloads downloads-openshift-console.apps.my-example-cluster-aws.z9a9.p1.openshiftapps.com ... 1 more downloads http edge/Redirect None NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD oauth-openshift oauth-openshift.apps.my-example-cluster-aws.z9a9.p1.openshiftapps.com ... 1 more oauth-openshift 6443 passthrough/Redirect None", "export INGRESS_ID=USD(rosa list ingress -c USD{CLUSTER_NAME} -o json | jq -r '.[] | select(.default == true) | .id')", "echo \"Ingress ID: USD{INGRESS_ID}\"", "Ingress ID: r3l6", "rosa edit ingress -h Edit a cluster ingress for a cluster. Usage: rosa edit ingress ID [flags] [...] --component-routes string Component routes settings. Available keys [oauth, console, downloads]. For each key a pair of hostname and tlsSecretRef is expected to be supplied. Format should be a comma separate list 'oauth: hostname=example-hostname;tlsSecretRef=example-secret-ref,downloads:...'", "openssl req -newkey rsa:2048 -new -nodes -x509 -days 365 -keyout key-console.pem -out cert-console.pem -subj \"/CN=console.my-new-domain.dev\" openssl req -newkey rsa:2048 -new -nodes -x509 -days 365 -keyout key-downloads.pem -out cert-downloads.pem -subj \"/CN=downloads.my-new-domain.dev\" openssl req -newkey rsa:2048 -new -nodes -x509 -days 365 -keyout key-oauth.pem -out cert-oauth.pem -subj \"/CN=oauth.my-new-domain.dev\"", "oc create secret tls console-tls --cert=cert-console.pem --key=key-console.pem -n openshift-config oc create secret tls downloads-tls --cert=cert-downloads.pem --key=key-downloads.pem -n openshift-config oc create secret tls oauth-tls --cert=cert-oauth.pem --key=key-oauth.pem -n openshift-config", "oc get svc -n openshift-ingress NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.237.88 a234gsr3242rsfsfs-1342r624.us-east-1.elb.amazonaws.com 80:31175/TCP,443:31554/TCP 76d", "rosa edit ingress -c USD{CLUSTER_NAME} USD{INGRESS_ID} --component-routes 'console: hostname=console.my-new-domain.dev;tlsSecretRef=console-tls,downloads: hostname=downloads.my-new-domain.dev;tlsSecretRef=downloads-tls,oauth: hostname=oauth.my-new-domain.dev;tlsSecretRef=oauth-tls'", "rosa edit ingress -c USD{CLUSTER_NAME} USD{INGRESS_ID} --component-routes 'console: hostname=console.my-new-domain.dev;tlsSecretRef=console-tls,downloads: hostname=\"\";tlsSecretRef=\"\", oauth: hostname=oauth.my-new-domain.dev;tlsSecretRef=oauth-tls'", "rosa list ingress -c USD{CLUSTER_NAME} -ojson | jq \".[] | select(.id == \\\"USD{INGRESS_ID}\\\") | .component_routes\"", "{ \"console\": { \"kind\": \"ComponentRoute\", \"hostname\": \"console.my-new-domain.dev\", \"tls_secret_ref\": \"console-tls\" }, \"downloads\": { \"kind\": \"ComponentRoute\", \"hostname\": \"downloads.my-new-domain.dev\", \"tls_secret_ref\": \"downloads-tls\" }, \"oauth\": { \"kind\": \"ComponentRoute\", \"hostname\": \"oauth.my-new-domain.dev\", \"tls_secret_ref\": \"oauth-tls\" } }", "rosa edit ingress -c USD{CLUSTER_NAME} USD{INGRESS_ID} --component-routes 'console: hostname=\"\";tlsSecretRef=\"\",downloads: hostname=\"\";tlsSecretRef=\"\", 
oauth: hostname=\"\";tlsSecretRef=\"\"'" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/tutorials/cloud-experts-update-component-routes
Chapter 56. Jira
Chapter 56. Jira Both producer and consumer are supported The JIRA component interacts with the JIRA API by encapsulating Atlassian's REST Java Client for JIRA . It currently provides polling for new issues and new comments. It is also able to create new issues, add comments, change issues, add/remove watchers, add attachment and transition the state of an issue. Rather than webhooks, this endpoint relies on simple polling. Reasons include: Concern for reliability/stability The types of payloads we're polling aren't typically large (plus, paging is available in the API) The need to support apps running somewhere not publicly accessible where a webhook would fail Note that the JIRA API is fairly expansive. Therefore, this component could be easily expanded to provide additional interactions. 56.1. Dependencies When using jira with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jira-starter</artifactId> </dependency> 56.2. URI format The Jira type accepts the following operations: For consumers: newIssues: retrieve only new issues after the route is started newComments: retrieve only new comments after the route is started watchUpdates: retrieve only updated fields/issues based on provided jql For producers: addIssue: add an issue addComment: add a comment on a given issue attach: add an attachment on a given issue deleteIssue: delete a given issue updateIssue: update fields of a given issue transitionIssue: transition a status of a given issue watchers: add/remove watchers of a given issue As Jira is fully customizable, you must assure the fields IDs exists for the project and workflow, as they can change between different Jira servers. 56.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 56.3.1. Configuring Component Options At the component level, you set general and shared configurations that are, then, inherited by the endpoints. It is the highest configuration level. For example, a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. You can configure components using: the Component DSL . in a configuration file (application.properties, *.yaml files, etc). directly in the Java code. 56.3.2. Configuring Endpoint Options You usually spend more time setting up endpoints because they have many options. These options help you customize what you want the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from), as a producer (to), or both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders . Property placeholders provide a few benefits: They help prevent using hardcoded urls, port numbers, sensitive information, and other settings. They allow externalizing the configuration from the code. They help the code to become more flexible and reusable. 
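For example, with the Spring Boot starter the connection details can be kept in application.properties rather than repeated in each endpoint URI (the property names below appear in the auto-configuration table later in this chapter; the URL and credentials are placeholders):
camel.component.jira.jira-url=http://my_jira.com:8081
camel.component.jira.username=camel-bot
camel.component.jira.password=changeit
A consumer endpoint can then be as short as jira://newIssues?jql=RAW(project=MYP AND resolution = Unresolved), with the shared settings picked up from the component configuration.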
The following two sections list all the options, firstly for the component followed by the endpoint. 56.4. Component Options The Jira component supports 12 options, which are listed below. Name Description Default Type delay (common) Time in milliseconds to elapse for the poll. 6000 Integer jiraUrl (common) Required The Jira server url, example: . String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean configuration (advanced) To use a shared base jira configuration. JiraConfiguration accessToken (security) (OAuth only) The access token generated by the Jira server. String consumerKey (security) (OAuth only) The consumer key from Jira settings. String password (security) (Basic authentication only) The password to authenticate to the Jira server. Use only if username basic authentication is used. String privateKey (security) (OAuth only) The private key generated by the client to encrypt the conversation to the server. String username (security) (Basic authentication only) The username to authenticate to the Jira server. Use only if OAuth is not enabled on the Jira server. Do not set the username and OAuth token parameter, if they are both set, the username basic authentication takes precedence. String verificationCode (security) (OAuth only) The verification code from Jira generated in the first step of the authorization proccess. String 56.5. Endpoint Options The Jira endpoint is configured using URI syntax: with the following path and query parameters: 56.5.1. Path Parameters (1 parameters) Name Description Default Type type (common) Required Operation to perform. Consumers: NewIssues, NewComments. Producers: AddIssue, AttachFile, DeleteIssue, TransitionIssue, UpdateIssue, Watchers. See this class javadoc description for more information. Enum values: ADDCOMMENT ADDISSUE ATTACH DELETEISSUE NEWISSUES NEWCOMMENTS WATCHUPDATES UPDATEISSUE TRANSITIONISSUE WATCHERS ADDISSUELINK ADDWORKLOG FETCHISSUE FETCHCOMMENTS JiraType 56.5.2. Query Parameters (16 parameters) Name Description Default Type delay (common) Time in milliseconds to elapse for the poll. 
6000 Integer jiraUrl (common) Required The Jira server url, example: . String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean jql (consumer) JQL is the query language from JIRA which allows you to retrieve the data you want. For example jql=project=MyProject Where MyProject is the product key in Jira. It is important to use the RAW() and set the JQL inside it to prevent camel parsing it, example: RAW(project in (MYP, COM) AND resolution = Unresolved). String maxResults (consumer) Max number of issues to search for. 50 Integer sendOnlyUpdatedField (consumer) Indicator for sending only changed fields in exchange body or issue object. By default consumer sends only changed fields. true boolean watchedFields (consumer) Comma separated list of fields to watch for changes. Status,Priority are the defaults. Status,Priority String exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean accessToken (security) (OAuth only) The access token generated by the Jira server. String consumerKey (security) (OAuth only) The consumer key from Jira settings. String password (security) (Basic authentication only) The password to authenticate to the Jira server. Use only if username basic authentication is used. String privateKey (security) (OAuth only) The private key generated by the client to encrypt the conversation to the server. String username (security) (Basic authentication only) The username to authenticate to the Jira server. Use only if OAuth is not enabled on the Jira server. Do not set the username and OAuth token parameter, if they are both set, the username basic authentication takes precedence. String verificationCode (security) (OAuth only) The verification code from Jira generated in the first step of the authorization proccess. String 56.6. Client Factory You can bind the JiraRestClientFactory with name JiraRestClientFactory in the registry to have it automatically set in the Jira endpoint. 56.7. Authentication Camel-jira supports Basic Authentication and OAuth 3 legged authentication . We recommend to use OAuth whenever possible, as it provides the best security for your users and system. 56.7.1. 
Basic authentication requirements: A username and password 56.7.2. OAuth authentication requirements: Follow the tutorial in Jira OAuth documentation to generate the client private key, consumer key, verification code and access token. A private key, generated locally on your system. A verification code, generated by the Jira server. The consumer key, set in the Jira server settings. An access token, generated by the Jira server. 56.8. JQL The JQL URI option is used by both consumer endpoints. Theoretically, items like "project key", etc. could be URI options themselves. However, by requiring the use of JQL, the consumers become much more flexible and powerful. At the bare minimum, the consumers will require the following: One important thing to note is that the newIssues consumer will automatically set the JQL as: append ORDER BY key desc to your JQL prepend id > latestIssueId to retrieve issues added after the camel route was started. This is in order to optimize startup processing, rather than having to index every single issue in the project. Another note is that, similarly, the newComments consumer will have to index every single issue and comment in the project. Therefore, for large projects, it's vital to optimize the JQL expression as much as possible. For example, the JIRA Toolkit Plugin includes a "Number of comments" custom field - use '"Number of comments" > 0' in your query. Also try to minimize based on state (status=Open), increase the polling delay, etc. Example: 56.9. Operations See a list of required headers to set when using the Jira operations. The author field for the producers is automatically set to the authenticated user on the Jira side. If any required field is not set, then an IllegalArgumentException is thrown. There are operations that require an id for fields such as: issue type, priority, transition. Check the valid ids in your Jira project, as they may differ between Jira installations and project workflows. 56.10. AddIssue Required: ProjectKey : The project key, example: CAMEL, HHH, MYP. IssueTypeId or IssueTypeName : The id of the issue type or the name of the issue type, you can see the valid list in http://jira_server/rest/api/2/issue/createmeta?projectKeys=SAMPLE_KEY . IssueSummary : The summary of the issue. Optional: IssueAssignee : the assignee user IssuePriorityId or IssuePriorityName : The priority of the issue, you can see the valid list in http://jira_server/rest/api/2/priority . IssueComponents : A list of strings with the valid component names. IssueWatchersAdd : A list of strings with the usernames to add to the watcher list. IssueDescription : The description of the issue. 56.11. AddComment Required: IssueKey : The issue key identifier. body of the exchange is the description. 56.12. Attach Only one file should be attached per invocation. Required: IssueKey : The issue key identifier. body of the exchange should be of type File 56.13. DeleteIssue Required: IssueKey : The issue key identifier. 56.14. TransitionIssue Required: IssueKey : The issue key identifier. IssueTransitionId : The issue transition id . body of the exchange is the description. 56.15. UpdateIssue IssueKey : The issue key identifier. IssueTypeId or IssueTypeName : The id of the issue type or the name of the issue type, you can see the valid list in http://jira_server/rest/api/2/issue/createmeta?projectKeys=SAMPLE_KEY . IssueSummary : The summary of the issue.
IssueAssignee : the assignee user IssuePriorityId or IssuePriorityName : The priority of the issue, you can see the valid list in http://jira_server/rest/api/2/priority . IssueComponents : A list of string with the valid component names. IssueDescription : The description of the issue. 56.16. Watcher IssueKey : The issue key identifier. IssueWatchersAdd : A list of strings with the usernames to add to the watcher list. IssueWatchersRemove : A list of strings with the usernames to remove from the watcher list. 56.17. WatchUpdates (consumer) watchedFields Comma separated list of fields to watch for changes i.e Status,Priority,Assignee,Components etc. sendOnlyUpdatedField By default only changed field is send as the body. All messages also contain following headers that add additional info about the change: issueKey : Key of the updated issue changed : name of the updated field (i.e Status) watchedIssues : list of all issue keys that are watched in the time of update 56.18. Spring Boot Auto-Configuration The component supports 13 options, which are listed below. Name Description Default Type camel.component.jira.access-token (OAuth only) The access token generated by the Jira server. String camel.component.jira.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.jira.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.jira.configuration To use a shared base jira configuration. The option is a org.apache.camel.component.jira.JiraConfiguration type. JiraConfiguration camel.component.jira.consumer-key (OAuth only) The consumer key from Jira settings. String camel.component.jira.delay Time in milliseconds to elapse for the poll. 6000 Integer camel.component.jira.enabled Whether to enable auto configuration of the jira component. This is enabled by default. Boolean camel.component.jira.jira-url The Jira server url, example: http://my_jira.com:8081/ . String camel.component.jira.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.jira.password (Basic authentication only) The password to authenticate to the Jira server. Use only if username basic authentication is used. String camel.component.jira.private-key (OAuth only) The private key generated by the client to encrypt the conversation to the server. 
String camel.component.jira.username (Basic authentication only) The username to authenticate to the Jira server. Use only if OAuth is not enabled on the Jira server. Do not set both the username and the OAuth token parameters; if both are set, the username basic authentication takes precedence. String camel.component.jira.verification-code (OAuth only) The verification code from Jira generated in the first step of the authorization process. String
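For illustration only, a minimal Camel Java DSL sketch combining a newIssues consumer restricted by JQL with an addIssue producer that sets the required headers described above might look as follows; the Jira URL, credentials, and project key are placeholder assumptions, not values from this reference.

import org.apache.camel.builder.RouteBuilder;

public class JiraRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // Consume newly created issues matching a narrow JQL filter (placeholder server and credentials)
        from("jira://newIssues?jiraUrl=https://jira.example.com&username=demo&password=secret"
                + "&jql=RAW(project=MYP AND status=Open)")
            .log("New issue received: ${body}");

        // Create an issue; the headers below are the required AddIssue headers listed in this section
        from("direct:createIssue")
            .setHeader("ProjectKey", constant("MYP"))
            .setHeader("IssueTypeName", constant("Task"))
            .setHeader("IssueSummary", constant("Issue created from a Camel route"))
            .to("jira://addIssue?jiraUrl=https://jira.example.com&username=demo&password=secret");
    }
}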
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jira-starter</artifactId> </dependency>", "jira://type[?options]", "jira:type", "jira://[type]?[required options]&jql=project=[project key]", "jira://[type]?[required options]&jql=RAW(project=[project key] AND status in (Open, \\\"Coding In Progress\\\") AND \\\"Number of comments\\\">0)\"" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-jira-component-starter
Chapter 35. Configuring a Red Hat High Availability cluster with IBM z/VM instances as cluster members
Chapter 35. Configuring a Red Hat High Availability cluster with IBM z/VM instances as cluster members Red Hat provides several articles that may be useful when designing, configuring, and administering a Red Hat High Availability cluster running on z/VM virtual machines. Design Guidance for RHEL High Availability Clusters - IBM z/VM Instances as Cluster Members Administrative Procedures for RHEL High Availability Clusters - Configuring z/VM SMAPI Fencing with fence_zvmip for RHEL 7 or 8 IBM z Systems Cluster Members RHEL High Availability cluster nodes on IBM z Systems experience STONITH-device timeouts around midnight on a nightly basis (Red Hat Knowledgebase) Administrative Procedures for RHEL High Availability Clusters - Preparing a dasd Storage Device for Use by a Cluster of IBM z Systems Members You may also find the following articles useful when designing a Red Hat High Availability cluster in general. Support Policies for RHEL High Availability Clusters Exploring Concepts of RHEL High Availability Clusters - Fencing/STONITH
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_high_availability_clusters/ref_ibmz-configuring-and-managing-high-availability-clusters
Chapter 6. Server migration changes
Chapter 6. Server migration changes Before migrating, ensure you understand the migration changes necessary for deploying applications on a server and upgrading them in Red Hat JBoss Enterprise Application Platform 8.0. 6.1. Web server configuration changes Learn about changes in mod_cluster and Undertow within Red Hat JBoss Enterprise Application Platform that impact root context behavior and enhance the security of your server information. 6.1.1. Default web module behavior changes In JBoss EAP 7.0, the root context of a web application was disabled by default in mod_cluster . As of JBoss EAP 7.1, this is no longer the case. This can have unexpected consequences if you are expecting the root context to be disabled. For example, requests can be misrouted to undesired nodes or a private application that should not be exposed can be inadvertently accessible through a public proxy. Undertow locations are also now registered with the mod_cluster load balancer automatically unless they are explicitly excluded. Use the following management CLI command to exclude ROOT from the modcluster subsystem configuration. /subsystem=modcluster/mod-cluster-config=configuration:write-attribute(name=excluded-contexts,value=ROOT) Use the following management CLI command to disable the default welcome web application. /subsystem=undertow/server=default-server/host=default-host/location=\/:remove /subsystem=undertow/configuration=handler/file=welcome-content:remove reload Additional resources Configure the Default Welcome Web Application 6.1.2. Undertow subsystem default configuration changes Prior to Red Hat JBoss Enterprise Application Platform 7.2, the default undertow subsystem configuration included two response header filters that were appended to each HTTP response by the default-host : Server was previously set to JBoss EAP/7 . X-Powered-By was previously set to Undertow/1 . These response header filters were removed from the default JBoss EAP 7.2 configuration to prevent inadvertent disclosure of information about the server in use. The following is an example of the default undertow subsystem configuration in JBoss EAP 7.1. <subsystem xmlns="urn:jboss:domain:undertow:4.0"> <buffer-cache name="default"/> <server name="default-server"> <http-listener name="default" socket-binding="http" redirect-socket="https"/> <https-listener name="https" socket-binding="https" security-realm="ApplicationRealm" enable-http2="true"/> <host name="default-host" alias="localhost"> <location name="/" handler="welcome-content"/> <filter-ref name="server-header"/> <filter-ref name="x-powered-by-header"/> <http-invoker security-realm="ApplicationRealm"/> </host> </server> <servlet-container name="default"> <jsp-config/> <websockets/> </servlet-container> <handlers> <file name="welcome-content" path="USD{jboss.home.dir}/welcome-content"/> </handlers> <filters> <response-header name="server-header" header-name="Server" header-value="JBoss-EAP/7"/> <response-header name="x-powered-by-header" header-name="X-Powered-By" header-value="Undertow/1"/> </filters> </subsystem> The following is an example of the default undertow subsystem configuration in JBoss EAP 7.4. 
<subsystem xmlns="urn:jboss:domain:undertow:12.0" default-server="default-server" default-virtual-host="default-host" default-servlet-container="default" default-security-domain="other"> <buffer-cache name="default"/> <server name="default-server"> <http-listener name="default" socket-binding="http" redirect-socket="https" enable-http2="true"/> <https-listener name="https" socket-binding="https" security-realm="ApplicationRealm" enable-http2="true"/> <host name="default-host" alias="localhost"> <location name="/" handler="welcome-content"/> <http-invoker security-realm="ApplicationRealm"/> </host> </server> <servlet-container name="default"> <jsp-config/> <websockets/> </servlet-container> <handlers> <file name="welcome-content" path="USD{jboss.home.dir}/welcome-content"/> </handlers> </subsystem> The following is an example of the default undertow subsystem configuration in JBoss EAP 8.0. <subsystem xmlns="urn:jboss:domain:undertow:14.0" default-virtual-host="default-host" default-servlet-container="default" default-server="default-server" statistics-enabled="USD{wildfly.undertow.statistics-enabled:USD{wildfly.statistics-enabled:false}}" default-security-domain="other"> <byte-buffer-pool name="default"/> <buffer-cache name="default"/> <server name="default-server"> <http-listener name="default" socket-binding="http" redirect-socket="https" enable-http2="true"/> <https-listener name="https" socket-binding="https" ssl-context="applicationSSC" enable-http2="true"/> <host name="default-host" alias="localhost"> <location name="/" handler="welcome-content"/> <http-invoker http-authentication-factory="application-http-authentication"/> </host> </server> <servlet-container name="default"> <jsp-config/> <websockets/> </servlet-container> <handlers> <file name="welcome-content" path="USD{jboss.home.dir}/welcome-content"/> </handlers> <application-security-domains> <application-security-domain name="other" security-domain="ApplicationDomain"/> </application-security-domains> </subsystem> 6.2. Infinispan server configuration changes Configure a custom stateful session bean (SFSB) cache for passivation in Red Hat JBoss Enterprise Application Platform 7.1 and later while considering the following aspects: Deprecation of the idle-timeout attribute Implementation of lazy passivation Determination of cluster name Appropriate configuration of eviction and expiration Modifications in the cache container transport protocol for enhanced performance. By adhering to these considerations, you can optimize your SFSB cache configuration for improved passivation in JBoss EAP 7.1 and beyond. 6.2.1. Configuring custom stateful session bean cache for passivation In JBoss EAP 7.1 and later versions, a custom stateful session beans (SFSB) cache with passivation enabled has changed. When configuring SFSB cache with passivation, consider the following key changes: Deprecation of the idle-timeout attribute A shift from eager to lazy passivation Determining the cluster name Configuring eviction and expiration in the Jakarta Enterprise Beans cache When configuring a custom SFSB cache for passivation in JBoss EAP 7.1 and later versions, consider the following restrictions: The idle-timeout attribute, which is configured in the infinispan passivation-store of the ejb3 subsystem, is deprecated in JBoss EAP 7.1 and later. JBoss EAP 7.1 and later only support lazy passivation, which occurs when the max-size threshold is reached. Note Eager passivation through idle-timeout is no longer supported in these versions. 
In JBoss EAP 7.1 and later, the cluster name used by the Jakarta Enterprise Beans client is determined by the actual cluster name of the channel, as configured in the jgroups subsystem. JBoss EAP 7.1 and later still allow you to set the max-size attribute to control the passivation threshold. 6.2.2. Infinispan cache container transport changes A behavior change between JBoss EAP 7.0 and later versions requires performing updates to the cache container transport protocol in batch mode or using a special header. This change also affects tools used for managing the JBoss EAP server. The following is an example of the management CLI commands used to configure the cache container transport protocol in JBoss EAP 7.0. /subsystem=infinispan/cache-container=my:add() /subsystem=infinispan/cache-container=my/transport=jgroups:add() /subsystem=infinispan/cache-container=my/invalidation-cache=mycache:add(mode=SYNC) The following is an example of the management CLI commands needed to perform the same configuration in JBoss EAP 7.1. Note that the commands are executed in batch mode. batch /subsystem=infinispan/cache-container=my:add() /subsystem=infinispan/cache-container=my/transport=jgroups:add() /subsystem=infinispan/cache-container=my/invalidation-cache=mycache:add(mode=SYNC) run-batch If you prefer not to use batch mode, you can instead specify the operation header allow-resource-service-restart=true when defining the transport. If you use scripts to update the cache container transport protocol, be sure to review them and add batch mode. 6.2.3. EJB subsystem configuration changes from version 8.0 and later JBoss EAP 8.0 introduces changes to the Enterprise JavaBeans (EJB) subsystem configuration for distributable stateful session beans (SFSB), including a new subsystem and updates to several resources. Several resources used in JBoss EAP 6 and 7 are also deprecated. These changes enable server configuration migration to ensure that your applications are compatible with future major releases. JBoss EAP 8.0 replaces the deprecated resources used in JBoss EAP 6 and 7 with two new resources and a distributable-ejb subsystem for configuring SFSB caching distributively. The following table outlines the deprecated resources and the new resources that replace them. Table 6.1. SFSB cache configuration changes Deprecated resources New non-distributable SFSB cache New distributable SFSB cache /subsystem=ejb3/cache /subsystem=ejb3/simple-cache /subsystem=ejb3/distributable-cache /subsystem=ejb3/passivation-store NA /subsystem=ejb3/distributable-cache="name"/bean-management"=.. Non-distributable SFSB cache, /subsystem=ejb3/simple-cache , is equivalent to the SFSB cache, /subsystem=ejb3/cache , used in JBoss EAP 7, where no passivation store was defined. Distributable SFSB cache, /subsystem=ejb3/distributable-cache , includes an optional bean-management attribute that refers to a corresponding resource from the distributable-ejb subsystem. If you do not define the resource, it defaults to the bean-management resource within the distributable-ejb subsystem. Consider migrating your server configuration to the updated approach in JBoss EAP 8.0. Although the current release continues to function with the deprecated resources, this might not be the case with future releases when they get removed. 
An example of a comparison between JBoss EAP 7 and preferred JBoss EAP 8.0 configurations is as follows: JBoss EAP 7 configuration: Preferred JBoss EAP 8.0 configuration: Adopting the preferred JBoss EAP 8.0 configuration ensures that your servers are compatible with the latest version and future major releases. You will also benefit from improved resources and subsystems for distributable SFSBs. 6.3. Jakarta Enterprise Beans server configuration changes While configuring the ejb3 subsystem in JBoss EAP 7, exceptions may appear in the server log during deployment of enterprise bean applications. Important If you use the JBoss Server Migration Tool to update your server configuration, ensure that the ejb3 subsystem is properly configured and no issues arise when deploying your Jakarta Enterprise Beans applications. For information about configuring and running the tool, see Using the JBoss Server Migration Tool . 6.3.1. Resolving DuplicateServiceException due to caching changes The following DuplicateServiceException error is caused by caching changes in JBoss EAP 7. DuplicateServiceException in server log ERROR [org.jboss.msc.service.fail] (MSC service thread 1-3) MSC000001: Failed to start service jboss.deployment.unit."mdb-1.0-SNAPSHOT.jar".cache-dependencies-installer: org.jboss.msc.service.StartException in service jboss.deployment.unit."mdb-1.0-SNAPSHOT.jar".cache-dependencies-installer: Failed to start service ... Caused by: org.jboss.msc.service.DuplicateServiceException: Service jboss.infinispan.ejb."mdb-1.0-SNAPSHOT.jar".config is already registered To resolve the DuplicateServiceException caused by caching changes in JBoss EAP 7, run the following commands to reconfigure caching in the ejb3 subsystem. /subsystem=ejb3/file-passivation-store=file:remove /subsystem=ejb3/cluster-passivation-store=infinispan:remove /subsystem=ejb3/passivation-store=infinispan:add(cache-container=ejb, max-size=10000) /subsystem=ejb3/cache=passivating:remove /subsystem=ejb3/cache=clustered:remove /subsystem=ejb3/cache=distributable:add(passivation-store=infinispan, aliases=[passivating, clustered]) By reconfiguring the cache, you can resolve this error and prevent the DuplicateServiceException from occurring. 6.4. Messaging server configuration changes Learn how to migrate both your configuration and associated messaging data to ActiveMQ Artemis, which serves as the Jakarta Messaging support provider in Red Hat JBoss Enterprise Application Platform 8.0. 6.4.1. Migrate messaging data Review the approaches you can take to migrate messaging data in Red Hat JBoss Enterprise Application Platform. To migrate messaging data from a JBoss EAP 7.x release to JBoss EAP 8.0, you can Migrate messaging data by using export and import approaches . This method involves exporting messaging data from the release and importing it into JBoss EAP 8.0 using the management CLI import-journal operation. Note that this approach is specifically applicable to file-based messaging systems. As with version 7, JBoss EAP 8.0 continues to use ActiveMQ Artemis as the Jakarta Messaging support provider, which helps to make the migration process smoother. 6.4.1.1. Migrate messaging data by using export and import approaches Use the following approach to export the messaging data from a release to an XML file, and then import that file using the import-journal operation: Export messaging data from JBoss EAP 7.x. 
Import the XML formatted messaging data Important You cannot use the export and import method to move messaging data between systems that use a JDBC-based journal for storage. 6.4.1.1.1. Export messaging data from JBoss EAP 7.x release To export messaging data from a Red Hat JBoss Enterprise Application Platform 7.x release, follow the outlined procedure. Prerequisites JBoss EAP 7.x is installed on your system. You have access to a terminal or command line interface. You have the necessary permissions to navigate directories and execute commands. Procedure Open a terminal, navigate to the JBoss EAP 7.x install directory, and start the server in admin-only mode. USD EAP_HOME/bin/standalone.sh -c standalone-full.xml --start-mode=admin-only Open a new terminal, navigate to the JBoss EAP 7.x install directory, and connect to the management CLI. USD EAP_HOME/bin/jboss-cli.sh --connect Use the following management CLI command to export the messaging journal data. /subsystem=messaging-activemq/server=default:export-journal() Verification Make sure there are no errors or warning messages in the log at the completion of the command. Use a tool compatible with your operating system to validate the XML in the generated output file. 6.4.1.1.2. Import the XML formatted messaging data After exporting messaging data from a JBoss EAP 7.x release, you need to import the XML file into JBoss EAP 8.0 or later using the import-journal operation. Prerequisites Complete the migration of your server configuration to JBoss EAP 8.0 by using either the management CLI migrate operation or the JBoss Server Migration Tool. Start the JBoss EAP 8.0 server in normal mode without any connected Jakarta Messaging clients. Procedure To import the XML file into JBoss EAP 8.0 or a later version, follow these steps using the import-journal operation: Important If your target server has already performed some messaging tasks, make sure to back up your messaging folders before you begin the import-journal operation to prevent data loss in the event of an import failure. For more information, see Backing up messaging folder data . Start the JBoss EAP 8.0 server in normal mode with no Jakarta Messaging clients connected. Important It is important that you start the server with no Jakarta Messaging clients connected. This is because the import-journal operation behaves like a Jakarta Messaging producer. Messages are immediately available when the operation is in progress. If this operation fails in the middle of the import and Jakarta Messaging clients are connected, there is no way to recover because Jakarta Messaging clients might have already consumed some of the messages. Open a new terminal, navigate to the JBoss EAP 8.0 install directory, and connect to the management CLI. Use the following management CLI command to import the messaging data: Important Do not run this command more than once, as doing so will result in duplicate messages. 6.4.1.1.3. Recovering from an import messaging data failure You can recover from an import messaging data failure if the import-journal operation fails. Prerequisites Familiarity with the JBoss EAP 8.0 server and its management CLI commands. Knowledge of the directory location of messaging journal folders. Prior backup of target server messaging data if available. Procedure Shut down the JBoss EAP 8.0 server. Delete all of the messaging journal folders. See Backing up messaging folder data for the management CLI commands to determine the correct directory location for the messaging journal folders.
If you backed up the target server messaging data prior to the import, copy the messaging folders from the backup location to the messaging journal directory determined in the prior step. Repeat the steps to Import the XML formatted messaging data . 6.4.1.2. Migrate messaging data using a messaging bridge A Jakarta Messaging bridge consumes messages from a source Jakarta Messaging queue or topic and sends them to a target Jakarta Messaging queue or topic, located on a different server. It enables message bridging between messaging servers that adhere to the Jakarta Messaging 3.1 standards. Look up the source and destination Jakarta Messaging resources using Java Naming and Directory Interface, ensuring that the client classes for Java Naming and Directory Interface lookup are bundled in a module and declare the module name in the Jakarta Messaging bridge configuration. This section provides instructions on how to configure the servers and deploy a messaging bridge for moving messaging data from JBoss EAP 7 to JBoss EAP 8.0. To achieve this, proceed with the following steps: Configuring JBoss EAP 8.0 server Migrating the messaging data 6.4.1.2.1. Configuring JBoss EAP 8.0 server To configure the Jakarta Messaging bridge in JBoss EAP 8.0 for seamless migration of messaging data, including module dependencies and queue configuration, follow the outlined procedure. Prerequisites JBoss EAP 8.0 server installed and running. Procedure Create the following jms-queue configuration for the default server in the messaging-activemq subsystem of the JBoss EAP 8.0 server. jms-queue add --queue-address=MigratedMessagesQueue --entries=[jms/queue/MigratedMessagesQueue java:jboss/exported/jms/queue/MigratedMessagesQueue] Make sure that messaging-activemq subsystem default server contains a configuration for the InVmConnectionFactory connection-factory similar to the following: <connection-factory name="InVmConnectionFactory" factory-type="XA_GENERIC" entries="java:/ConnectionFactory" connectors="in-vm"/> If it does not contain the entry, create one using the following management CLI command: /subsystem=messaging-activemq/server=default/connection-factory=InVmConnectionFactory:add(factory-type=XA_GENERIC, connectors=[in-vm], entries=[java:/ConnectionFactory]) Create and deploy a Jakarta Messaging bridge that reads messages from the InQueue JMS queue and transfers them to the MigratedMessagesQueue configured on the JBoss EAP 7.x server. /subsystem=messaging-activemq/jms-bridge=myBridge:add(add-messageID-in-header=true,max-batch-time=100,max-batch-size=10,max-retries=-1,failure-retry-interval=1000,quality-of-service=AT_MOST_ONCE,module=org.hornetq,source-destination=jms/queue/InQueue,source-connection-factory=jms/RemoteConnectionFactory,source-context=[("java.naming.factory.initial"=>"org.wildfly.naming.client.WildFlyInitialContextFactory"),("java.naming.provider.url"=>"http-remoting://legacy-host:8080")],target-destination=jms/queue/MigratedMessagesQueue,target-connection-factory=java:/ConnectionFactory) This creates the following jms-bridge configuration in the messaging-activemq subsystem of the JBoss EAP 8.0 server. 
<jms-bridge name="myBridge" add-messageID-in-header="true" max-batch-time="100" max-batch-size="10" max-retries="-1" failure-retry-interval="1000" quality-of-service="AT_MOST_ONCE"> <source destination="jms/queue/InQueue" connection-factory="jms/RemoteConnectionFactory"> <source-context> <property name="java.naming.factory.initial" value="org.wildfly.naming.client.WildFlyInitialContextFactory"/> <property name="java.naming.provider.url" value="http-remoting://legacy-host:8080"/> </source-context> </source> <target destination="jms/queue/MigratedMessagesQueue" connection-factory="java:/ConnectionFactory"/> </jms-bridge> 6.4.1.2.2. Migrating the messaging data To migrate messaging data from Red Hat JBoss Enterprise Application Platform 8.0 to Red Hat JBoss Enterprise Application Platform 8.0, follow the outlined procedure. Prerequisites JBoss EAP 8.0 server installed and running. Procedure Verify that the information you provided for the following configurations is correct. Any queue and topic names. The java.naming.provider.url for Java Naming and Directory Interface lookup. Make sure that you have deployed the target Jakarta Messaging destination to the JBoss EAP 8.0 server. Start the JBoss EAP 8.0 servers, including the JBoss EAP 7 servers involved in the migration process. 6.4.1.3. Backing up messaging folder data To ensure data integrity, it is recommended to back up the target message folders before making any changes if your server has already processed messages. You can find the default location of the messaging folders at EAP_HOME /standalone/data/activemq/ ; however, it might be configurable. If you are unsure about the location of your messaging data, you can use the following management CLI commands to determine it. Procedure Determine the location of your messaging data by using the following management CLI commands: /subsystem=messaging-activemq/server=default/path=journal-directory:resolve-path /subsystem=messaging-activemq/server=default/path=paging-directory:resolve-path /subsystem=messaging-activemq/server=default/path=bindings-directory:resolve-path /subsystem=messaging-activemq/server=default/path=large-messages-directory:resolve-path Note Ensure that you stop the server before copying the data. Copy each messaging folder to a secure backup location after you identify their respective locations. 6.4.2. Configure the Jakarta Messaging resource adapter The way you configure a generic Jakarta Messaging resource adapter for use with a third-party Jakarta Messaging provider has changed in Red Hat JBoss Enterprise Application Platform 8.0. For more information, see Deploying a generic Java Message Service resource adapter in the JBoss EAP 7.4 Configuring Messaging guide. 6.4.3. Messaging configuration changes In Red Hat JBoss Enterprise Application Platform 7.0, if you configured the replication-primary policy without specifying the check-for-live-server attribute, its default value was set to false . This has changed in JBoss EAP 7.1 and later. The default value for the check-for-live-server attribute is now set to true . The following is an example of a management CLI command that configures the replication-primary policy without specifying the check-for-live-server attribute. /subsystem=messaging-activemq/server=default/ha-policy=replication-primary:add(cluster-name=my-cluster,group-name=group1) When you read the resource using the management CLI, note that the check-for-live-server attribute value is set to true . 
/subsystem=messaging-activemq/server=default/ha-policy=replication-primary:read-resource(recursive=true) { "outcome" => "success", "result" => { "check-for-live-server" => true, "cluster-name" => "my-cluster", "group-name" => "group1", "initial-replication-sync-timeout" => 30000L }, "response-headers" => {"process-state" => "reload-required"} } 6.4.4. Galleon layer for embedded broker messaging In JBoss EAP 7, an embedded messaging broker was part of the default installation. In JBoss EAP 8, this functionality was added to a new Galleon layer called as embedded-activemq . This new layer is not a part of the default configuration so users who want to rely on having a broker embedded in JBoss EAP must include it explicitly in their configuration. The layer provides a messaging-activemq subsystem with an embedded broker even if it is recommended for customers to use a dedicated AMQ cluster on OpenShift. It also provisions ancillary resources, for example, socket-bindings and necessary dependencies needed to support this use case. 6.5. Security enhancements in JBoss EAP 8.0 Starting with JBoss EAP 8.0, you must use Elytron since the legacy security subsystem and legacy security realms are no longer available. You can only configure Elytron defaults by using the JBoss Server Migration Tool. Therefore, legacy security configurations must be manually migrated. Additional resources Migrating to Elytron 6.5.1. Vaults migration Vaults has been removed from JBoss EAP 8.0. Use the credential store provided by the elytron subsystem to store sensitive strings. Additional resources Migrate secure vaults and properties Credentials and credential stores in Elytron 6.5.2. Legacy security subsystem and security realms removal The legacy security subsystem and legacy security realms have been removed from JBoss EAP 8.0. Use the security realms provided by the elytron subsystem. Additional resources Migrate Authentication Configuration Migrate Database Authentication Configuration to Elytron Migrate Composite Stores to Elytron Migrate Security Domains That Use Caching to Elytron Migrating Legacy Properties-based Configuration to Elytron Migrating LDAP Authentication Configuration to Elytron Migrate Kerberos Authentication to Elytron Migrate SSL Configurations Securing applications and management interfaces using an identity store Securing applications and management interfaces using multiple identity stores 6.5.3. PicketLink subsystem removal The PicketLink subsystem has been removed from JBoss EAP 8.0. Use Red Hat build of Keycloak instead of the PicketLink identity provider, and the Red Hat build of Keycloak SAML adapter instead of the PicketLink service provider. Additional resources PicketLink removal Red Hat build of Keycloak 6.5.4. Migrate from Red Hat build of Keycloak OIDC client adapter to JBoss EAP subsystem The keycloak subsystem is not supported in JBoss EAP 8.0 and is replaced by the elytron-oidc-client subsystem. JBoss Server Migration Tool performs the migration by default. Additional resources Migrate keycloak subsystem OpenID Connect configuration in JBoss EAP 6.5.5. Custom login modules migration In JBoss EAP 8.0, the legacy security subsystem has been removed. To continue using your custom login modules with the elytron subsystem, use the new Java Authentication and Authorization Service (JAAS) security realm and jaas-realm . Additional resources JAAS realm in the elytron subsystem 6.5.6. 
FIPS mode changes Starting from JBoss EAP 7.1, automatic generation of a self-signed certificate is enabled by default for development purposes. If you are running in FIPS mode, configure the server to disable automatic self-signed certificate creation. Failure to do so may lead to the following error upon starting the server: ERROR [org.xnio.listener] (default I/O-6) XNIO001007: A channel event listener threw an exception: java.lang.RuntimeException: WFLYDM0114: Failed to lazily initialize SSL context ... Caused by: java.lang.RuntimeException: WFLYDM0112: Failed to generate self signed certificate ... Caused by: java.security.KeyStoreException: Cannot get key bytes, not PKCS#8 encoded Additional resources Enabling SSL/TLS for applications by using the automatically generated self-signed certificate 6.6. mod_cluster configuration changes The configuration for static proxy lists in mod_cluster has changed in Red Hat JBoss Enterprise Application Platform 7.4. Starting from JBoss EAP 7.4, the proxy-list attribute was deprecated and subsequently removed in JBoss EAP 8.0. It has been replaced by the proxies attribute, which is a list of outbound socket binding names. This change impacts how you define a static proxy list, for example, when disabling advertising for mod_cluster. For information about how to disable advertising for mod_cluster, see Disable advertising for mod_cluster in the JBoss EAP 7.4 Configuration Guide . To ensure compatibility with JBoss EAP 8.0, update user scripts and legacy user CLI script as follows: Replace the deprecated ssl=configuration with the appropriate elytron-based configuration. Update the mod_cluster configuration path from /mod-cluster-config=CONFIGURATION to /proxy=default . Update the dynamic load provider path in user scripts, replacing the deprecated path with provider=dynamic . The deprecated connector attribute, which referred to an Undertow listener, has been removed. Update your user scripts to use the listener attribute as a replacement. For more information about mod_cluster attributes, see ModCluster subsystem attributes in the JBoss EAP 7.4 Configuration Guide . 6.7. Viewing configuration changes With Red Hat JBoss Enterprise Application Platform 7, you can track the configuration changes that were made to a running server. You can also view the history of configuration changes made by authorized users. Whereas with JBoss EAP 7.0, you had to use the core-service management CLI command to configure options and to retrieve a list of recent configuration changes. Example: List configuration changes in JBoss EAP 7.0 JBoss EAP 7.1 introduced a new core-management subsystem that can be configured to track configuration changes made to the running server. This is the preferred method of configuring and viewing configuration changes in JBoss EAP 7.1 and later. Example: List configuration changes in JBoss EAP 7.1 and later For more information about using the new core-management subsystem introduced in JBoss EAP 7.1, see View configuration changes in the JBoss EAP 7.4 Configuration Guide .
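For illustration only (the binding name, host, and port below are assumptions), replacing a removed static proxy-list definition typically means creating an outbound socket binding and then referencing it from the proxies attribute on the new /proxy=default path, for example:

/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=proxy1:add(host=load-balancer.example.com, port=6666)
/subsystem=modcluster/proxy=default:list-add(name=proxies, value=proxy1)
reload

This sketch assumes a standalone server using the standard-sockets socket binding group; adjust the binding group and proxy details to match your environment.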
[ "/subsystem=modcluster/mod-cluster-config=configuration:write-attribute(name=excluded-contexts,value=ROOT)", "/subsystem=undertow/server=default-server/host=default-host/location=\\/:remove /subsystem=undertow/configuration=handler/file=welcome-content:remove reload", "<subsystem xmlns=\"urn:jboss:domain:undertow:4.0\"> <buffer-cache name=\"default\"/> <server name=\"default-server\"> <http-listener name=\"default\" socket-binding=\"http\" redirect-socket=\"https\"/> <https-listener name=\"https\" socket-binding=\"https\" security-realm=\"ApplicationRealm\" enable-http2=\"true\"/> <host name=\"default-host\" alias=\"localhost\"> <location name=\"/\" handler=\"welcome-content\"/> <filter-ref name=\"server-header\"/> <filter-ref name=\"x-powered-by-header\"/> <http-invoker security-realm=\"ApplicationRealm\"/> </host> </server> <servlet-container name=\"default\"> <jsp-config/> <websockets/> </servlet-container> <handlers> <file name=\"welcome-content\" path=\"USD{jboss.home.dir}/welcome-content\"/> </handlers> <filters> <response-header name=\"server-header\" header-name=\"Server\" header-value=\"JBoss-EAP/7\"/> <response-header name=\"x-powered-by-header\" header-name=\"X-Powered-By\" header-value=\"Undertow/1\"/> </filters> </subsystem>", "<subsystem xmlns=\"urn:jboss:domain:undertow:12.0\" default-server=\"default-server\" default-virtual-host=\"default-host\" default-servlet-container=\"default\" default-security-domain=\"other\"> <buffer-cache name=\"default\"/> <server name=\"default-server\"> <http-listener name=\"default\" socket-binding=\"http\" redirect-socket=\"https\" enable-http2=\"true\"/> <https-listener name=\"https\" socket-binding=\"https\" security-realm=\"ApplicationRealm\" enable-http2=\"true\"/> <host name=\"default-host\" alias=\"localhost\"> <location name=\"/\" handler=\"welcome-content\"/> <http-invoker security-realm=\"ApplicationRealm\"/> </host> </server> <servlet-container name=\"default\"> <jsp-config/> <websockets/> </servlet-container> <handlers> <file name=\"welcome-content\" path=\"USD{jboss.home.dir}/welcome-content\"/> </handlers> </subsystem>", "<subsystem xmlns=\"urn:jboss:domain:undertow:14.0\" default-virtual-host=\"default-host\" default-servlet-container=\"default\" default-server=\"default-server\" statistics-enabled=\"USD{wildfly.undertow.statistics-enabled:USD{wildfly.statistics-enabled:false}}\" default-security-domain=\"other\"> <byte-buffer-pool name=\"default\"/> <buffer-cache name=\"default\"/> <server name=\"default-server\"> <http-listener name=\"default\" socket-binding=\"http\" redirect-socket=\"https\" enable-http2=\"true\"/> <https-listener name=\"https\" socket-binding=\"https\" ssl-context=\"applicationSSC\" enable-http2=\"true\"/> <host name=\"default-host\" alias=\"localhost\"> <location name=\"/\" handler=\"welcome-content\"/> <http-invoker http-authentication-factory=\"application-http-authentication\"/> </host> </server> <servlet-container name=\"default\"> <jsp-config/> <websockets/> </servlet-container> <handlers> <file name=\"welcome-content\" path=\"USD{jboss.home.dir}/welcome-content\"/> </handlers> <application-security-domains> <application-security-domain name=\"other\" security-domain=\"ApplicationDomain\"/> </application-security-domains> </subsystem>", "/subsystem=infinispan/cache-container=my:add() /subsystem=infinispan/cache-container=my/transport=jgroups:add() /subsystem=infinispan/cache-container=my/invalidation-cache=mycache:add(mode=SYNC)", "batch /subsystem=infinispan/cache-container=my:add() 
/subsystem=infinispan/cache-container=my/transport=jgroups:add() /subsystem=infinispan/cache-container=my/invalidation-cache=mycache:add(mode=SYNC) run-batch", "/subsystem=ejb3/cache=example-simple-cache:add() /subsystem=ejb3/passivation-store=infinispan:add(cache-container=ejb, bean-cache=default, max-size=1024) /subsystem=ejb3/cache=example-distributed-cache:add(passivation-store=infinispan)", "/subsystem=ejb3/simple-cache=example-simple-cache:add() /subsystem=distributable-ejb=example-distributed-cache/infinispan-bean-management=example-bean-cache:add(cache-container=ejb, cache=default, max-active-beans=1024) /subsystem=ejb3/distributable-cache=example-distributed-cache:add(bean-management=example-bean-cache)", "ERROR [org.jboss.msc.service.fail] (MSC service thread 1-3) MSC000001: Failed to start service jboss.deployment.unit.\"mdb-1.0-SNAPSHOT.jar\".cache-dependencies-installer: org.jboss.msc.service.StartException in service jboss.deployment.unit.\"mdb-1.0-SNAPSHOT.jar\".cache-dependencies-installer: Failed to start service Caused by: org.jboss.msc.service.DuplicateServiceException: Service jboss.infinispan.ejb.\"mdb-1.0-SNAPSHOT.jar\".config is already registered", "/subsystem=ejb3/file-passivation-store=file:remove /subsystem=ejb3/cluster-passivation-store=infinispan:remove /subsystem=ejb3/passivation-store=infinispan:add(cache-container=ejb, max-size=10000) /subsystem=ejb3/cache=passivating:remove /subsystem=ejb3/cache=clustered:remove /subsystem=ejb3/cache=distributable:add(passivation-store=infinispan, aliases=[passivating, clustered])", "EAP_HOME/bin/standalone.sh -c standalone-full.xml --start-mode=admin-only", "EAP_HOME/bin/jboss-cli.sh --connect", "/subsystem=messaging-activemq/server=default:export-journal()", "EAP_HOME /bin/jboss-cli.sh --connect", "/subsystem=messaging-activemq/server=default:import-journal(file= OUTPUT_DIRECTORY /OldMessagingData.xml)", "jms-queue add --queue-address=MigratedMessagesQueue --entries=[jms/queue/MigratedMessagesQueue java:jboss/exported/jms/queue/MigratedMessagesQueue]", "<connection-factory name=\"InVmConnectionFactory\" factory-type=\"XA_GENERIC\" entries=\"java:/ConnectionFactory\" connectors=\"in-vm\"/>", "/subsystem=messaging-activemq/server=default/connection-factory=InVmConnectionFactory:add(factory-type=XA_GENERIC, connectors=[in-vm], entries=[java:/ConnectionFactory])", "/subsystem=messaging-activemq/jms-bridge=myBridge:add(add-messageID-in-header=true,max-batch-time=100,max-batch-size=10,max-retries=-1,failure-retry-interval=1000,quality-of-service=AT_MOST_ONCE,module=org.hornetq,source-destination=jms/queue/InQueue,source-connection-factory=jms/RemoteConnectionFactory,source-context=[(\"java.naming.factory.initial\"=>\"org.wildfly.naming.client.WildFlyInitialContextFactory\"),(\"java.naming.provider.url\"=>\"http-remoting://legacy-host:8080\")],target-destination=jms/queue/MigratedMessagesQueue,target-connection-factory=java:/ConnectionFactory)", "<jms-bridge name=\"myBridge\" add-messageID-in-header=\"true\" max-batch-time=\"100\" max-batch-size=\"10\" max-retries=\"-1\" failure-retry-interval=\"1000\" quality-of-service=\"AT_MOST_ONCE\"> <source destination=\"jms/queue/InQueue\" connection-factory=\"jms/RemoteConnectionFactory\"> <source-context> <property name=\"java.naming.factory.initial\" value=\"org.wildfly.naming.client.WildFlyInitialContextFactory\"/> <property name=\"java.naming.provider.url\" value=\"http-remoting://legacy-host:8080\"/> </source-context> </source> <target destination=\"jms/queue/MigratedMessagesQueue\" 
connection-factory=\"java:/ConnectionFactory\"/> </jms-bridge>", "/subsystem=messaging-activemq/server=default/path=journal-directory:resolve-path /subsystem=messaging-activemq/server=default/path=paging-directory:resolve-path /subsystem=messaging-activemq/server=default/path=bindings-directory:resolve-path /subsystem=messaging-activemq/server=default/path=large-messages-directory:resolve-path", "/subsystem=messaging-activemq/server=default/ha-policy=replication-primary:add(cluster-name=my-cluster,group-name=group1)", "/subsystem=messaging-activemq/server=default/ha-policy=replication-primary:read-resource(recursive=true) { \"outcome\" => \"success\", \"result\" => { \"check-for-live-server\" => true, \"cluster-name\" => \"my-cluster\", \"group-name\" => \"group1\", \"initial-replication-sync-timeout\" => 30000L }, \"response-headers\" => {\"process-state\" => \"reload-required\"} }", "ERROR [org.xnio.listener] (default I/O-6) XNIO001007: A channel event listener threw an exception: java.lang.RuntimeException: WFLYDM0114: Failed to lazily initialize SSL context Caused by: java.lang.RuntimeException: WFLYDM0112: Failed to generate self signed certificate Caused by: java.security.KeyStoreException: Cannot get key bytes, not PKCS#8 encoded", "/core-service=management/service=configuration-changes:add(max-history=10) /core-service=management/service=configuration-changes:list-changes", "/subsystem=core-management/service=configuration-changes:add(max-history=20) /subsystem=core-management/service=configuration-changes:list-changes" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/migration_guide/server-migration_default
probe::udp.sendmsg.return
probe::udp.sendmsg.return Name probe::udp.sendmsg.return - Fires whenever an attempt to send a UDP message is completed Synopsis udp.sendmsg.return Values size Number of bytes sent by the process name The name of this probe Context The process which sent a UDP message
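A minimal SystemTap sketch (illustrative, not part of this reference entry) that uses this probe point to report the sending process and byte count could look like the following; it assumes the systemtap package and matching kernel debuginfo are installed.

probe udp.sendmsg.return {
  # size is the probe value documented above; execname() is the standard tapset helper
  printf("%s sent %d bytes\n", execname(), size)
}

Run it with stap, for example: stap -v udp_send.stp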
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-udp-sendmsg-return
A.3. ss
A.3. ss ss is a command-line utility that prints statistical information about sockets, allowing administrators to assess device performance over time. By default, ss lists open non-listening TCP sockets that have established connections, but a number of useful options are provided to help administrators filter statistics for specific sockets. One commonly used command is ss -tmpie , which displays all TCP sockets ( t ), internal TCP information ( i ), socket memory usage ( m ), processes using the socket ( p ), and detailed socket information ( e ). Red Hat recommends ss over netstat in Red Hat Enterprise Linux 7. ss is provided by the iproute package. For more information, see the man page:
[ "man ss" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-tool_reference-ss
Chapter 129. KafkaMirrorMakerConsumerSpec schema reference
Chapter 129. KafkaMirrorMakerConsumerSpec schema reference Used in: KafkaMirrorMakerSpec Full list of KafkaMirrorMakerConsumerSpec schema properties Configures a MirrorMaker consumer. 129.1. numStreams Use the consumer.numStreams property to configure the number of streams for the consumer. You can increase the throughput in mirroring topics by increasing the number of consumer threads. Consumer threads belong to the consumer group specified for Kafka MirrorMaker. Topic partitions are assigned across the consumer threads, which consume messages in parallel. 129.2. offsetCommitInterval Use the consumer.offsetCommitInterval property to configure an offset auto-commit interval for the consumer. You can specify the regular time interval at which an offset is committed after Kafka MirrorMaker has consumed data from the source Kafka cluster. The time interval is set in milliseconds, with a default value of 60,000. 129.3. config Use the consumer.config properties to configure Kafka options for the consumer as keys. The values can be one of the following JSON types: String Number Boolean Exceptions You can specify and configure the options listed in the Apache Kafka configuration documentation for consumers . However, Streams for Apache Kafka takes care of configuring and managing options related to the following, which cannot be changed: Kafka cluster bootstrap address Security (encryption, authentication, and authorization) Consumer group identifier Interceptors Properties with the following prefixes cannot be set: bootstrap.servers group.id interceptor.classes sasl. security. ssl. If the config property contains an option that cannot be changed, it is disregarded, and a warning message is logged to the Cluster Operator log file. All other supported options are forwarded to MirrorMaker, including the following exceptions to the options configured by Streams for Apache Kafka: Any ssl configuration for supported TLS versions and cipher suites Important The Cluster Operator does not validate keys or values in the config object provided. If an invalid configuration is provided, the MirrorMaker cluster might not start or might become unstable. In this case, fix the configuration so that the Cluster Operator can roll out the new configuration to all MirrorMaker nodes. 129.4. groupId Use the consumer.groupId property to configure a consumer group identifier for the consumer. Kafka MirrorMaker uses a Kafka consumer to consume messages, behaving like any other Kafka consumer client. Messages consumed from the source Kafka cluster are mirrored to a target Kafka cluster. A group identifier is required, as the consumer needs to be part of a consumer group for the assignment of partitions. 129.5. KafkaMirrorMakerConsumerSpec schema properties Property Property type Description numStreams integer Specifies the number of consumer stream threads to create. offsetCommitInterval integer Specifies the offset auto-commit interval in ms. Default value is 60000. bootstrapServers string A list of host:port pairs for establishing the initial connection to the Kafka cluster. groupId string A unique string that identifies the consumer group this consumer belongs to. authentication KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha256 , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth Authentication configuration for connecting to the cluster. tls ClientTls TLS configuration for connecting MirrorMaker to the cluster. 
config map The MirrorMaker consumer config. Properties with the following prefixes cannot be set: ssl., bootstrap.servers, group.id, sasl., security., interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols).
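The following is an illustrative configuration sketch, not a complete resource: it shows only the consumer section of a KafkaMirrorMaker custom resource using the properties described above, assumes the v1beta2 API version and placeholder bootstrap and group values, and omits the producer section and other required fields.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  # ... replicas, include, and producer configuration omitted ...
  consumer:
    bootstrapServers: my-source-cluster-kafka-bootstrap:9092
    groupId: my-mirror-maker-group
    numStreams: 2
    offsetCommitInterval: 120000
    config:
      # standard Kafka consumer options; forbidden prefixes listed above are not used here
      max.poll.records: 100
      receive.buffer.bytes: 32768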
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-KafkaMirrorMakerConsumerSpec-reference
5.4.14. Growing Logical Volumes
5.4.14. Growing Logical Volumes To increase the size of a logical volume, use the lvextend command. When you extend the logical volume, you can indicate how much you want to extend the volume, or how large you want it to be after you extend it. The following command extends the logical volume /dev/myvg/homevol to 12 gigabytes. The following command adds another gigabyte to the logical volume /dev/myvg/homevol . As with the lvcreate command, you can use the -l argument of the lvextend command to specify the number of extents by which to increase the size of the logical volume. You can also use this argument to specify a percentage of the volume group, or a percentage of the remaining free space in the volume group. The following command extends the logical volume called testlv to fill all of the unallocated space in the volume group myvg . After you have extended the logical volume it is necessary to increase the file system size to match. By default, most file system resizing tools will increase the size of the file system to be the size of the underlying logical volume so you do not need to worry about specifying the same size for each of the two commands. 5.4.14.1. Extending a Striped Volume In order to increase the size of a striped logical volume, there must be enough free space on the underlying physical volumes that make up the volume group to support the stripe. For example, if you have a two-way stripe that that uses up an entire volume group, adding a single physical volume to the volume group will not enable you to extend the stripe. Instead, you must add at least two physical volumes to the volume group. For example, consider a volume group vg that consists of two underlying physical volumes, as displayed with the following vgs command. You can create a stripe using the entire amount of space in the volume group. Note that the volume group now has no more free space. The following command adds another physical volume to the volume group, which then has 135G of additional space. At this point you cannot extend the striped logical volume to the full size of the volume group, because two underlying devices are needed in order to stripe the data. To extend the striped logical volume, add another physical volume and then extend the logical volume. In this example, having added two physical volumes to the volume group we can extend the logical volume to the full size of the volume group. If you do not have enough underlying physical devices to extend the striped logical volume, it is possible to extend the volume anyway if it does not matter that the extension is not striped, which may result in uneven performance. When adding space to the logical volume, the default operation is to use the same striping parameters of the last segment of the existing logical volume, but you can override those parameters. The following example extends the existing striped logical volume to use the remaining free space after the initial lvextend command fails.
[ "lvextend -L12G /dev/myvg/homevol lvextend -- extending logical volume \"/dev/myvg/homevol\" to 12 GB lvextend -- doing automatic backup of volume group \"myvg\" lvextend -- logical volume \"/dev/myvg/homevol\" successfully extended", "lvextend -L+1G /dev/myvg/homevol lvextend -- extending logical volume \"/dev/myvg/homevol\" to 13 GB lvextend -- doing automatic backup of volume group \"myvg\" lvextend -- logical volume \"/dev/myvg/homevol\" successfully extended", "lvextend -l +100%FREE /dev/myvg/testlv Extending logical volume testlv to 68.59 GB Logical volume testlv successfully resized", "vgs VG #PV #LV #SN Attr VSize VFree vg 2 0 0 wz--n- 271.31G 271.31G", "lvcreate -n stripe1 -L 271.31G -i 2 vg Using default stripesize 64.00 KB Rounding up size to full physical extent 271.31 GB Logical volume \"stripe1\" created lvs -a -o +devices LV VG Attr LSize Origin Snap% Move Log Copy% Devices stripe1 vg -wi-a- 271.31G /dev/sda1(0),/dev/sdb1(0)", "vgs VG #PV #LV #SN Attr VSize VFree vg 2 1 0 wz--n- 271.31G 0", "vgextend vg /dev/sdc1 Volume group \"vg\" successfully extended vgs VG #PV #LV #SN Attr VSize VFree vg 3 1 0 wz--n- 406.97G 135.66G", "lvextend vg/stripe1 -L 406G Using stripesize of last segment 64.00 KB Extending logical volume stripe1 to 406.00 GB Insufficient suitable allocatable extents for logical volume stripe1: 34480 more required", "vgextend vg /dev/sdd1 Volume group \"vg\" successfully extended vgs VG #PV #LV #SN Attr VSize VFree vg 4 1 0 wz--n- 542.62G 271.31G lvextend vg/stripe1 -L 542G Using stripesize of last segment 64.00 KB Extending logical volume stripe1 to 542.00 GB Logical volume stripe1 successfully resized", "lvextend vg/stripe1 -L 406G Using stripesize of last segment 64.00 KB Extending logical volume stripe1 to 406.00 GB Insufficient suitable allocatable extents for logical volume stripe1: 34480 more required lvextend -i1 -l+100%FREE vg/stripe1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/lv_extend
Chapter 4. Accessing the overcloud
Chapter 4. Accessing the overcloud The director generates a script to configure and help authenticate interactions with your overcloud from the undercloud. The director saves this file ( overcloudrc ) in your stack user's home directory. Run the following command to use this file: This loads the necessary environment variables to interact with your overcloud from the undercloud CLI. To return to interacting with the undercloud, run the following command:
[ "source ~/overcloudrc", "source ~/stackrc" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/integrating_an_overcloud_with_an_existing_red_hat_ceph_cluster/accessing_the_overcloud
Chapter 1. 3scale API Management operator-based upgrade guide: from 2.14 to 2.15
Chapter 1. 3scale API Management operator-based upgrade guide: from 2.14 to 2.15 Upgrade Red Hat 3scale API Management from version 2.14 to 2.15, in an operator-based installation to manage 3scale on OpenShift 4.x. To automatically obtain a micro-release of 3scale, make sure automatic updates is on. Do not set automatic updates if you are using an Oracle external database. To check this, see Configuring automated application of micro releases . Important In order to understand the required conditions and procedure, read the entire upgrade guide before applying the listed steps. The upgrade process disrupts the provision of the service until the procedure finishes. Due to this disruption, make sure to have a maintenance window. 1.1. Prerequisites to perform the upgrade Important To resolve certificate verification failures with the 3scale operator, add the annotation to skip certificate verification to the affected Custom Resource (CR). This annotation can be applied to a CR during creation or added to an existing CR. Once applied, the errors are reconciled. This section describes the required configurations to upgrade 3scale from 2.14 to 2.15 in an operator-based installation. An OpenShift Container Platform (OCP) 4.12, 4.13, 4.14, 4.15, 4.16, or 4.17 cluster with administrator access. Ensure that your OCP environment is upgraded to at least version 4.12, which is the minimal requirement for proceeding with a 3scale update. 3scale 2.14 previously deployed via the 3scale operator. Make sure the latest CSV of the threescale-2.14 channel is in use. To check it: If the approval setting for the subscription is automatic , you should already be in the latest CSV version of the channel. If the approval setting for the subscription is manual , make sure you approve all pending InstallPlans and have the latest CSV version. Keep in mind if there is a pending install plan, there might be more pending install plans, which will only be shown after the existing pending plan has been installed. 1.1.1. 3scale API Management 2.15 pre-flight checks Important If the databases are not upgraded, the 3scale instance will not be upgraded to 2.15. You can upgrade your databases with or without the 3scale 2.15 operator running. If the operator is running, it checks database versions every 10 minutes and will automatically trigger the upgrade process. If the operator was not running during the upgrade, scale it back up. You must do this to verify the requirements and continue with the installation. Before installing the 3scale 2.15 via the operator, ensure your database components meet the required minimum versions. This pre-flight check is critical to avoid breaking your 3scale instance during the upgrade. 1.1.1.1. Components and minimum version requirements Note The Oracle Database is not checked. The system database with Oracle is not checked. Zync with external databases is not checked. Ensure the following components are at or above the specified versions: System-app component: MySQL: 8.0.0 PostgreSQL: 10.0.0 Backend component: Redis: 6.2 (two instances required) Version verification Verify MySQL version: USD mysql --version Verify PostgreSQL version: USD psql --version Verify Redis version: USD redis-server --version 1.1.1.2. Upgrading databases not meeting requirements If your database versions do not meet the minimum requirements, follow these steps: Install the 3scale 2.15 operator : The 2.15 operator is installed regardless of the database versions. 
Upgrade databases: Upgrade MySQL, PostgreSQL, or Redis to meet the minimum required versions. Note: Follow the official documentation for the upgrade procedures of each database. Resume the 2.15 upgrade: Once the databases are upgraded, the 3scale 2.15 operator detects the new versions. The upgrade process for 3scale 2.15 then proceeds automatically. By following these pre-flight checks and ensuring your database components are up to date, you can transition to 3scale 2.15. 1.2. Upgrading from 2.14 to 2.15 in an operator-based installation To upgrade 3scale from version 2.14 to 2.15 in an operator-based deployment: Log in to the OCP console using an account with administrator privileges. Select the project where the 3scale-operator has been deployed. Click Operators > Installed Operators . Select Red Hat Integration - 3scale > Subscription > Channel . Edit the channel of the subscription by selecting threescale-2.15 and save the changes. This starts the upgrade process. Query the pods' status in the project until you see that all the new versions are running and ready without errors: USD oc get pods -n <3scale_namespace> Note The pods might show temporary errors during the upgrade process. The time required to upgrade the pods can vary from 5-10 minutes. After the new pod versions are running, confirm a successful upgrade by logging in to the 3scale Admin Portal and checking that it works as expected. Check the status of the APIManager objects and get the YAML content by running the following command. <myapimanager> represents the name of your APIManager : USD oc get apimanager <myapimanager> -n <3scale_namespace> -o yaml The new annotations with the values should be as follows: After you have performed all steps, the 3scale upgrade from 2.14 to 2.15 in an operator-based deployment is complete. 1.3. Upgrading from 2.14 to 2.15 in an operator-based installation with an external Oracle database Follow this procedure to update your 3scale operator-based installation with an external Oracle database. Procedure Follow the steps in the Installing Red Hat 3scale API Management guide to create a new system-oracle-3scale-2.14.0-1 image. Follow the steps in Upgrading from 2.14 to 2.15 in an operator-based installation to upgrade the 3scale operator. Once the upgrade is completed, update the APIManager custom resource with the new image created in the first step of this procedure as described in Installing 3scale API Management with Oracle using the operator .
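For illustration only, the channel change in the procedure above can also be made from the command line; the subscription name below is an assumption, so confirm it first with oc get subscriptions -n <3scale_namespace>:

oc patch subscription threescale-operator -n <3scale_namespace> --type merge -p '{"spec":{"channel":"threescale-2.15"}}'
oc get csv -n <3scale_namespace> -w

The second command watches the ClusterServiceVersion until the 2.15 operator reports Succeeded.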
[ "mysql --version", "psql --version", "redis-server --version", "oc get pods -n <3scale_namespace>", "oc get apimanager <myapimanager> -n <3scale_namespace> -o yaml", "apps.3scale.net/apimanager-threescale-version: \"2.15\" apps.3scale.net/threescale-operator-version: \"0.12.x\"" ]
https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/migrating_red_hat_3scale_api_management/upgrade-operator
Managing and allocating storage resources
Managing and allocating storage resources Red Hat OpenShift Data Foundation 4.16 Instructions on how to allocate storage to core services and hosted applications in OpenShift Data Foundation, including snapshot and clone. Red Hat Storage Documentation Team Abstract This document explains how to allocate storage to core services and hosted applications in Red Hat OpenShift Data Foundation.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/managing_and_allocating_storage_resources/index
Jenkins
Jenkins OpenShift Container Platform 4.14 Jenkins Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/jenkins/index
15.3. Multi-Supplier Replication
15.3. Multi-Supplier Replication In a multi-supplier replication scenario, the supplier copies of the directory data are stored on multiple read-write replicas. Each of these servers maintains a changelog for the read-write replica. Directory Server supports up to 20 suppliers in a replication topology. Note Each supplier in a multi-supplier replication environment is also a consumer automatically. The following diagram shows a multi-supplier replication environment with two suppliers: Figure 15.2. Multi-supplier Replication with Two Suppliers In complex environments, replication topologies often contain multiple read-write suppliers as well as read-only consumers. The following diagram shows a topology where each supplier is configured with ten replication agreements to replicate data to two other suppliers and eight consumers: Figure 15.3. Complex Replication Scenario with Four Suppliers and Eight Consumers Note The replication speed depends on: The speed of the network. The number of outgoing and incoming replication agreements. Use the command line or web console to set up a multi-supplier replication topology. See: Section 15.3.1, "Setting up Multi-supplier Replication Using the Command Line" Section 15.3.2, "Setting up Multi-supplier Replication Using the Web Console" 15.3.1. Setting up Multi-supplier Replication Using the Command Line The following example assumes that you have an existing Directory Server instance running on a host named supplier1.example.com . The following procedures describe how to add another read-write replica named supplier2.example.com to the topology, and how to configure multi-supplier replication for the dc=example,dc=com suffix. Preparing the New Server to Join On the supplier2.example.com host: Install Directory Server, and create an instance. For details, see the Red Hat Directory Server Installation Guide . In case you created the instance without a database, create the database for the suffix. For example, to create a database named userRoot for the dc=example,dc=com suffix: For details on creating a database for a suffix, see Section 2.1.1, "Creating Suffixes" . Enable replication for the suffix, and create the replication manager account: This command configures the supplier2.example.com host as a supplier for the dc=example,dc=com suffix, and sets the replica ID for this entry to 1 . Additionally, the server creates the cn=replication manager,cn=config user with the specified password, and allows this account to replicate changes for the suffix to this host. Important The replica ID must be a unique integer between 1 and 65534 for a suffix across all suppliers in the topology. Configuring the Existing Server as a Supplier On the supplier1.example.com host: Similarly to the command you ran on the new server to join, enable replication for the dc=example,dc=com suffix, and create the replication manager account: The replica ID must be different than the one created in the section called "Preparing the New Server to Join" , but the replication manager account can use the same DN. Add the replication agreement, and initialize a new server. For example: This command creates a replication agreement named example-agreement-supplier1-to-supplier2 . The replication agreement defines settings, such as the consumer's host name, protocol, and authentication information that the supplier uses when connecting and replicating data to the consumer. After the agreement was created, Directory Server initializes the consumer. 
To initialize the consumer later, omit the --init option. Note that replication does not start before you initialize the consumer. For details on initializing a consumer, see Section 15.8.3, "Initializing a Consumer" . For further details about the options used in the command, enter: Verify whether the initialization was successful: Depending on the amount of data to replicate, the initialization can be time-consuming. Configuring the New Server as a Supplier On the supplier2.example.com host: Warning Do not continue if you have not initialized the suffix 'dc=example,dc=com' on the existing server as described in the section called "Configuring the Existing Server as a Supplier" . Otherwise, the empty database from the new server overrides the database on the existing supplier. Add the replication agreement to replicate information from supplier 2 to supplier 1 . For example: This command creates a replication agreement named example-agreement-supplier2-to-supplier1 . The replication agreement defines settings, such as the consumer's host name, protocol, and authentication information that the supplier uses when connecting and replicating data to the consumer. 15.3.2. Setting up Multi-supplier Replication Using the Web Console The following example assumes that you have an existing Directory Server instance running on a host named supplier1.example.com . The following procedures describe how to add another read-write replica named supplier2.example.com to the topology, and how to configure multi-supplier replication for the dc=example,dc=com suffix. Preparing the New Server to Join On the supplier2.example.com host: Install Directory Server, and create an instance. For details, see the Red Hat Directory Server Installation Guide . Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. In case you created the instance without a database, create the database for the suffix. For details about creating a database for a suffix, see Section 2.1.1, "Creating Suffixes" . Enable replication for the suffix: Open the Replication menu. Select the dc=example,dc=com suffix, and click Enable Replication . Select Supplier in the Replication Role field, enter a replica ID, as well as the DN and password of the replication manager account to create. For example: These settings configure the supplier2.example.com host as a supplier for the dc=example,dc=com suffix, and set the replica ID for this entry to 1 . Additionally, the server creates the cn=replication manager,cn=config user with the specified password, and allows this account to replicate changes for the suffix to this host. Important The replica ID must be a unique integer between 1 and 65534 for a suffix across all suppliers in the topology. Click Enable Replication . Configuring the Existing Server as a Supplier On the supplier1.example.com host: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Similarly to the settings on the new server to join, enable replication for the dc=example,dc=com suffix, and create a replication manager account: Open the Replication menu. Select the dc=example,dc=com suffix, and click Enable Replication . Select Supplier in the Replication Role field, enter a replica ID, as well as the DN and password of the replication manager account to create.
For example: The replica ID must be different than the one created in the section called "Preparing the New Server to Join" , but the replication manager account can use the same DN. Click Enable Replication . Add the replication agreement and initialize the consumer: Open the Replication menu, and select the Agreements entry. Click Create Replication Agreement , and fill the fields. For example: These settings create a replication agreement named example-agreement-supplier1-to-supplier2 . The replication agreement defines settings, such as the consumer's host name, protocol, and authentication information that the supplier uses when connecting and replicating data to the consumer. Select Do Online Initialization in the Consumer Initialization field to automatically initialize the consumer after saving the agreement. To initialize the consumer later, select Do Not Initialize . Note that replication does not start before you initialize the consumer. For details on initializing a consumer, see Section 15.8.3, "Initializing a Consumer" . Click Save Agreement . Verify whether the initialization was successful: Open the Replication menu. Select the Agreements entry. For a successfully completed initialization, the web console displays the Error (0) Replica acquired successfully: Incremental update succeeded message in the Last Update Status column. Depending on the amount of data to replicate, the initialization can be time-consuming. Configuring the New Server as a Supplier On the supplier2.example.com host: Warning Do not continue if you have not initialized the replication agreement on the existing server as described in the section called "Configuring the Existing Server as a Supplier" . Otherwise, the empty database from the new server overrides the database on the existing supplier. Add the replication agreement, and initialize the consumer: Open the Replication menu, and select the Agreements entry. Click Create Replication Agreement , and fill the fields. For example: These settings create a replication agreement named example-agreement-supplier2-to-supplier1 . Select Do Online Initialization in the Consumer Initialization field to automatically initialize the consumer after saving the agreement. To initialize the consumer later, select Do Not Initialize . Note that replication does not start before you initialize the consumer. For details on initializing a consumer, see Section 15.8.3, "Initializing a Consumer" . Click Save Agreement . Verify whether the initialization was successful: Open the Replication menu. Select the Agreements entry. If the initialization completed successfully, the web console displays the Error (0) Replica acquired successfully: Incremental update succeeded message in the Last Update Status column. Depending on the amount of data to replicate, the initialization can be time-consuming. 15.3.3. Preventing Monopolization of a Consumer in Multi-Supplier Replication One of the features of multi-supplier replication is that a supplier acquires exclusive access to the consumer for the replicated area. During this time, other suppliers are locked out of direct contact with the consumer. If a supplier attempts to acquire access while locked out, the consumer sends back a busy response, and the supplier sleeps for several seconds before making another attempt. During a low update load, the supplier sends its update to another consumer while the first consumer is locked, and then sends updates when the first consumer is free again.
A problem can arise if the locking supplier is under a heavy update load or has a lot of pending updates in the changelog. If the locking supplier finishes sending updates and has multiple pending changes to send, it immediately attempts to reacquire the consumer. Such attempt in most cases succeeds, because other suppliers are usually sleeping. This can cause a single supplier to monopolize a consumer for several hours or longer. The following attributes address this issue: nsds5ReplicaBusyWaitTime Sets the time in seconds for a supplier to wait after a consumer sends back a busy response before making another attempt to acquire access. For example, to configure that a supplier waits 5 seconds before making another acquire attempt: nsds5ReplicaSessionPauseTime Sets the time in seconds for a supplier to wait between two update sessions. If you set a value lower or equal than the value specified in nsds5ReplicaBusyWaitTime , Directory Server automatically uses the value for the nsds5ReplicaSessionPauseTime parameter that is one second higher than the value set in nsds5ReplicaBusyWaitTime . For example, to configure that the supplier waits 10 seconds between two update sessions: nsds5ReplicaReleaseTimeout Sets the timeout after which a supplier releases the replica, whether or not it has finished sending its updates. This prevents a single supplier from monopolizing a replica. For example, to configure a supplier to release a replica after 90 seconds in a heavy replication environment: For further details, see the parameter descriptions in the Red Hat Directory Server Configuration, Command, and File Reference . To log replica busy errors, enable Replication error logging (log level 8192 ). See Section 21.3.7, "Configuring the Log Levels" .
[ "dsconf -D \"cn=Directory Manager\" ldap://supplier2.example.com backend create --suffix=\"dc=example,dc=com\" --be-name=\"userRoot\"", "dsconf -D \"cn=Directory Manager\" ldap://supplier2.example.com replication enable --suffix=\"dc=example,dc=com\" --role=\"supplier\" --replica-id=1 --bind-dn=\"cn=replication manager,cn=config\" --bind-passwd=\" password \"", "dsconf -D \"cn=Directory Manager\" ldap://supplier1.example.com replication enable --suffix=\"dc=example,dc=com\" --role=\"supplier\" --replica-id=2 --bind-dn=\"cn=replication manager,cn=config\" --bind-passwd=\" password \"", "dsconf -D \"cn=Directory Manager\" ldap://supplier1.example.com repl-agmt create --suffix=\"dc=example,dc=com\" --host=\"supplier2.example.com\" --port= 636 --conn-protocol= LDAPS --bind-dn=\"cn=replication manager,cn=config\" --bind-passwd=\" password \" --bind-method= SIMPLE --init example-agreement-supplier1-to-supplier2", "dsconf -D \"cn=Directory Manager\" ldap://supplier1.example.com repl-agmt --help", "dsconf -D \"cn=Directory Manager\" ldap://supplier1.example.com repl-agmt init-status --suffix=\" dc=example,dc=com \" example-agreement-supplier1-to-supplier2 Agreement successfully initialized.", "dsconf -D \"cn=Directory Manager\" ldap://supplier2.example.com repl-agmt create --suffix=\"dc=example,dc=com\" --host=\"supplier1.example.com\" --port= 636 --conn-protocol= LDAPS --bind-dn=\"cn=replication manager,cn=config\" --bind-passwd=\" password \" --bind-method= SIMPLE example-agreement-supplier2-to-supplier1", "dsconf -D \"cn=Directory Manager\" ldap://supplier.example.com repl-agmt set --suffix=\" suffix \" --busy-wait-time=5 agreement_name", "dsconf -D \"cn=Directory Manager\" ldap://supplier.example.com repl-agmt set --suffix=\" suffix \" --session-pause-time=10 agreement_name", "dsconf -D \"cn=Directory Manager\" ldap://supplier.example.com replication set --suffix=\" suffix \" --repl-release-timeout=90" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/multi-supplier_replication
Chapter 89. Example decisions in Red Hat Decision Manager for an IDE
Chapter 89. Example decisions in Red Hat Decision Manager for an IDE Red Hat Decision Manager provides example decisions distributed as Java classes that you can import into your integrated development environment (IDE). You can use these examples to better understand decision engine capabilities or use them as a reference for the decisions that you define in your own Red Hat Decision Manager projects. The following example decision sets are some of the examples available in Red Hat Decision Manager: Hello World example : Demonstrates basic rule execution and use of debug output State example : Demonstrates forward chaining and conflict resolution through rule salience and agenda groups Fibonacci example : Demonstrates recursion and conflict resolution through rule salience Banking example : Demonstrates pattern matching, basic sorting, and calculation Pet Store example : Demonstrates rule agenda groups, global variables, callbacks, and GUI integration Sudoku example : Demonstrates complex pattern matching, problem solving, callbacks, and GUI integration House of Doom example : Demonstrates backward chaining and recursion Note For optimization examples provided with Red Hat build of OptaPlanner, see Getting started with Red Hat build of OptaPlanner . 89.1. Importing and executing Red Hat Decision Manager example decisions in an IDE You can import Red Hat Decision Manager example decisions into your integrated development environment (IDE) and execute them to explore how the rules and code function. You can use these examples to better understand decision engine capabilities or use them as a reference for the decisions that you define in your own Red Hat Decision Manager projects. Prerequisites Java 8 or later is installed. Maven 3.5.x or later is installed. An IDE is installed, such as Red Hat CodeReady Studio. Procedure Download and unzip the Red Hat Process Automation Manager 7.13.5 Source Distribution from the Red Hat Customer Portal to a temporary directory, such as /rhpam-7.13.5-sources . Open your IDE and select File Import Maven Existing Maven Projects , or the equivalent option for importing a Maven project. Click Browse , navigate to ~/rhpam-7.13.5-sources/src/drools-USDVERSION/drools-examples (or, for the Conway's Game of Life example, ~/rhpam-7.13.5-sources/src/droolsjbpm-integration-USDVERSION/droolsjbpm-integration-examples ), and import the project. Navigate to the example package that you want to run and find the Java class with the main method. Right-click the Java class and select Run As Java Application to run the example. To run all examples through a basic user interface, run the DroolsExamplesApp.java class (or, for Conway's Game of Life, the DroolsJbpmIntegrationExamplesApp.java class) in the org.drools.examples main class. Figure 89.1. Interface for all examples in drools-examples (DroolsExamplesApp.java) Figure 89.2. Interface for all examples in droolsjbpm-integration-examples (DroolsJbpmIntegrationExamplesApp.java) 89.2. Hello World example decisions (basic rules and debugging) The Hello World example decision set demonstrates how to insert objects into the decision engine working memory, how to match the objects using rules, and how to configure logging to trace the internal activity of the decision engine. 
The following is an overview of the Hello World example: Name : helloworld Main class : org.drools.examples.helloworld.HelloWorldExample (in src/main/java ) Module : drools-examples Type : Java application Rule file : org.drools.examples.helloworld.HelloWorld.drl (in src/main/resources ) Objective : Demonstrates basic rule execution and use of debug output In the Hello World example, a KIE session is generated to enable rule execution. All rules require a KIE session for execution. KIE session for rule execution KieServices ks = KieServices.Factory.get(); 1 KieContainer kc = ks.getKieClasspathContainer(); 2 KieSession ksession = kc.newKieSession("HelloWorldKS"); 3 1 Obtains the KieServices factory. This is the main interface that applications use to interact with the decision engine. 2 Creates a KieContainer from the project class path. This detects a /META-INF/kmodule.xml file from which it configures and instantiates a KieContainer with a KieModule . 3 Creates a KieSession based on the "HelloWorldKS" KIE session configuration defined in the /META-INF/kmodule.xml file. Note For more information about Red Hat Decision Manager project packaging, see Packaging and deploying an Red Hat Decision Manager project . Red Hat Decision Manager has an event model that exposes internal engine activity. Two default debug listeners, DebugAgendaEventListener and DebugRuleRuntimeEventListener , print debug event information to the System.err output. The KieRuntimeLogger provides execution auditing, the result of which you can view in a graphical viewer. Debug listeners and audit loggers // Set up listeners. ksession.addEventListener( new DebugAgendaEventListener() ); ksession.addEventListener( new DebugRuleRuntimeEventListener() ); // Set up a file-based audit logger. KieRuntimeLogger logger = KieServices.get().getLoggers().newFileLogger( ksession, "./target/helloworld" ); // Set up a ThreadedFileLogger so that the audit view reflects events while debugging. KieRuntimeLogger logger = ks.getLoggers().newThreadedFileLogger( ksession, "./target/helloworld", 1000 ); The logger is a specialized implementation built on the Agenda and RuleRuntime listeners. When the decision engine has finished executing, logger.close() is called. The example creates a single Message object with the message "Hello World" , inserts the status HELLO into the KieSession , executes rules with fireAllRules() . Data insertion and execution // Insert facts into the KIE session. final Message message = new Message(); message.setMessage( "Hello World" ); message.setStatus( Message.HELLO ); ksession.insert( message ); // Fire the rules. ksession.fireAllRules(); Rule execution uses a data model to pass data as inputs and outputs to the KieSession . The data model in this example has two fields: the message , which is a String , and the status , which can be HELLO or GOODBYE . Data model class public static class Message { public static final int HELLO = 0; public static final int GOODBYE = 1; private String message; private int status; ... } The two rules are located in the file src/main/resources/org/drools/examples/helloworld/HelloWorld.drl . The when condition of the "Hello World" rule states that the rule is activated for each Message object inserted into the KIE session that has the status Message.HELLO . Additionally, two variable bindings are created: the variable message is bound to the message attribute and the variable m is bound to the matched Message object itself. 
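The setup, logging, insertion, and firing steps described above can be assembled into a single small driver class. The following is a minimal sketch rather than the example's own main() method: it assumes the HelloWorldKS session, the HelloWorld.drl rules, and the nested Message class from drools-examples are available on the class path.

import org.kie.api.KieServices;
import org.kie.api.event.rule.DebugAgendaEventListener;
import org.kie.api.event.rule.DebugRuleRuntimeEventListener;
import org.kie.api.logger.KieRuntimeLogger;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

import org.drools.examples.helloworld.HelloWorldExample.Message;

public class HelloWorldDriver {

    public static void main(String[] args) {
        // Obtain the KIE services factory and the class path container.
        KieServices ks = KieServices.Factory.get();
        KieContainer kc = ks.getKieClasspathContainer();

        // Create the session defined as "HelloWorldKS" in META-INF/kmodule.xml.
        KieSession ksession = kc.newKieSession("HelloWorldKS");

        // Attach the debug listeners so engine activity is printed to System.err.
        ksession.addEventListener(new DebugAgendaEventListener());
        ksession.addEventListener(new DebugRuleRuntimeEventListener());

        // Record an audit log that can be opened in the IDE Audit View.
        KieRuntimeLogger logger =
                ks.getLoggers().newFileLogger(ksession, "./target/helloworld");

        try {
            // Insert a single fact and let the two rules react to it.
            Message message = new Message();
            message.setMessage("Hello World");
            message.setStatus(Message.HELLO);
            ksession.insert(message);

            ksession.fireAllRules();
        } finally {
            // Close the audit log and release the session resources.
            logger.close();
            ksession.dispose();
        }
    }
}

Wrapping the calls in try/finally ensures the audit log is flushed and the session is disposed even if a rule consequence throws an exception.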
The then action of the rule specifies to print the content of the bound variable message to System.out , and then changes the values of the message and status attributes of the Message object bound to m . The rule uses the modify statement to apply a block of assignments in one statement and to notify the decision engine of the changes at the end of the block. "Hello World" rule The "Good Bye" rule is similar to the "Hello World" rule except that it matches Message objects that have the status Message.GOODBYE . "Good Bye" rule To execute the example, run the org.drools.examples.helloworld.HelloWorldExample class as a Java application in your IDE. The rule writes to System.out , the debug listener writes to System.err , and the audit logger creates a log file in target/helloworld.log . System.out output in the IDE console System.err output in the IDE console To better understand the execution flow of this example, you can load the audit log file from target/helloworld.log into your IDE debug view or Audit View , if available (for example, in Window Show View in some IDEs). In this example, the Audit view shows that the object is inserted, which creates an activation for the "Hello World" rule. The activation is then executed, which updates the Message object and causes the "Good Bye" rule to activate. Finally, the "Good Bye" rule is executed. When you select an event in the Audit View , the origin event, which is the "Activation created" event in this example, is highlighted in green. Figure 89.3. Hello World example Audit View 89.3. State example decisions (forward chaining and conflict resolution) The State example decision set demonstrates how the decision engine uses forward chaining and any changes to facts in the working memory to resolve execution conflicts for rules in a sequence. The example focuses on resolving conflicts through salience values or through agenda groups that you can define in rules. The following is an overview of the State example: Name : state Main classes : org.drools.examples.state.StateExampleUsingSalience , org.drools.examples.state.StateExampleUsingAgendaGroup (in src/main/java ) Module : drools-examples Type : Java application Rule files : org.drools.examples.state.*.drl (in src/main/resources ) Objective : Demonstrates forward chaining and conflict resolution through rule salience and agenda groups A forward-chaining rule system is a data-driven system that starts with a fact in the working memory of the decision engine and reacts to changes to that fact. When objects are inserted into working memory, any rule conditions that become true as a result of the change are scheduled for execution by the agenda. In contrast, a backward-chaining rule system is a goal-driven system that starts with a conclusion that the decision engine attempts to satisfy, often using recursion. If the system cannot reach the conclusion or goal, it searches for subgoals, which are conclusions that complete part of the current goal. The system continues this process until either the initial conclusion is satisfied or all subgoals are satisfied. The decision engine in Red Hat Decision Manager uses both forward and backward chaining to evaluate rules. The following diagram illustrates how the decision engine evaluates rules using forward chaining overall with a backward-chaining segment in the logic flow: Figure 89.4. 
Rule evaluation logic using forward and backward chaining In the State example, each State class has fields for its name and its current state (see the class org.drools.examples.state.State ). The following states are the two possible states for each object: NOTRUN FINISHED State class public class State { public static final int NOTRUN = 0; public static final int FINISHED = 1; private final PropertyChangeSupport changes = new PropertyChangeSupport( this ); private String name; private int state; ... setters and getters go here... } The State example contains two versions of the same example to resolve rule execution conflicts: A StateExampleUsingSalience version that resolves conflicts by using rule salience A StateExampleUsingAgendaGroups version that resolves conflicts by using rule agenda groups Both versions of the state example involve four State objects: A , B , C , and D . Initially, their states are set to NOTRUN , which is the default value for the constructor that the example uses. State example using salience The StateExampleUsingSalience version of the State example uses salience values in rules to resolve rule execution conflicts. Rules with a higher salience value are given higher priority when ordered in the activation queue. The example inserts each State instance into the KIE session and then calls fireAllRules() . Salience State example execution final State a = new State( "A" ); final State b = new State( "B" ); final State c = new State( "C" ); final State d = new State( "D" ); ksession.insert( a ); ksession.insert( b ); ksession.insert( c ); ksession.insert( d ); ksession.fireAllRules(); // Dispose KIE session if stateful (not required if stateless). ksession.dispose(); To execute the example, run the org.drools.examples.state.StateExampleUsingSalience class as a Java application in your IDE. After the execution, the following output appears in the IDE console window: Salience State example output in the IDE console Four rules are present. First, the "Bootstrap" rule fires, setting A to state FINISHED , which then causes B to change its state to FINISHED . Objects C and D are both dependent on B , causing a conflict that is resolved by the salience values. To better understand the execution flow of this example, you can load the audit log file from target/state.log into your IDE debug view or Audit View , if available (for example, in Window Show View in some IDEs). In this example, the Audit View shows that the assertion of the object A in the state NOTRUN activates the "Bootstrap" rule, while the assertions of the other objects have no immediate effect. Figure 89.5. Salience State example Audit View Rule "Bootstrap" in salience State example The execution of the "Bootstrap" rule changes the state of A to FINISHED , which activates rule "A to B" . Rule "A to B" in salience State example The execution of rule "A to B" changes the state of B to FINISHED , which activates both rules "B to C" and "B to D" , placing their activations onto the decision engine agenda. Rules "B to C" and "B to D" in salience State example From this point on, both rules may fire and, therefore, the rules are in conflict. The conflict resolution strategy enables the decision engine agenda to decide which rule to fire. Rule "B to C" has the higher salience value ( 10 versus the default salience value of 0 ), so it fires first, modifying object C to state FINISHED . 
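Because the conflict between "B to C" and "B to D" is resolved on the agenda, it can be helpful to print the order in which matches actually fire. The following is a minimal sketch, not part of the shipped example: it assumes the salience State rules and the org.drools.examples.state.State class are on the class path, and the KIE session name "StateSalienceKS" is an assumption that must be replaced with the name declared in the example's kmodule.xml file.

import org.kie.api.KieServices;
import org.kie.api.event.rule.AfterMatchFiredEvent;
import org.kie.api.event.rule.DefaultAgendaEventListener;
import org.kie.api.runtime.KieSession;

import org.drools.examples.state.State;

public class StateFiringOrderDriver {

    public static void main(String[] args) {
        KieServices ks = KieServices.Factory.get();

        // The session name is an assumption; use the one from the example's kmodule.xml.
        KieSession ksession =
                ks.getKieClasspathContainer().newKieSession("StateSalienceKS");

        // Print every rule name as its match fires, so the salience-based
        // ordering ("B to C" before "B to D") is visible in the console.
        ksession.addEventListener(new DefaultAgendaEventListener() {
            @Override
            public void afterMatchFired(AfterMatchFiredEvent event) {
                System.out.println("Fired: " + event.getMatch().getRule().getName());
            }
        });

        ksession.insert(new State("A"));
        ksession.insert(new State("B"));
        ksession.insert(new State("C"));
        ksession.insert(new State("D"));

        ksession.fireAllRules();
        ksession.dispose();
    }
}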
The Audit View in your IDE shows the modification of the State object in the rule "A to B" , which results in two activations being in conflict. You can also use the Agenda View in your IDE to investigate the state of the decision engine agenda. In this example, the Agenda View shows the breakpoint in the rule "A to B" and the state of the agenda with the two conflicting rules. Rule "B to D" fires last, modifying object D to state FINISHED . Figure 89.6. Salience State example Agenda View State example using agenda groups The StateExampleUsingAgendaGroups version of the State example uses agenda groups in rules to resolve rule execution conflicts. Agenda groups enable you to partition the decision engine agenda to provide more execution control over groups of rules. By default, all rules are in the agenda group MAIN . You can use the agenda-group attribute to specify a different agenda group for the rule. Initially, a working memory has its focus on the agenda group MAIN . Rules in an agenda group only fire when the group receives the focus. You can set the focus either by using the method setFocus() or the rule attribute auto-focus . The auto-focus attribute enables the rule to be given a focus automatically for its agenda group when the rule is matched and activated. In this example, the auto-focus attribute enables rule "B to C" to fire before "B to D" . Rule "B to C" in agenda group State example The rule "B to C" calls setFocus() on the agenda group "B to D" , enabling its active rules to fire, which then enables the rule "B to D" to fire. Rule "B to D" in agenda group State example To execute the example, run the org.drools.examples.state.StateExampleUsingAgendaGroups class as a Java application in your IDE. After the execution, the following output appears in the IDE console window (same as the salience version of the State example): Agenda group State example output in the IDE console Dynamic facts in the State example Another notable concept in this State example is the use of dynamic facts , based on objects that implement a PropertyChangeListener object. In order for the decision engine to see and react to changes of fact properties, the application must notify the decision engine that changes occurred. You can configure this communication explicitly in the rules by using the modify statement, or implicitly by specifying that the facts implement the PropertyChangeSupport interface as defined by the JavaBeans specification. This example demonstrates how to use the PropertyChangeSupport interface to avoid the need for explicit modify statements in the rules. To make use of this interface, ensure that your facts implement PropertyChangeSupport in the same way that the class org.drools.example.State implements it, and then use the following code in the DRL rule file to configure the decision engine to listen for property changes on those facts: Declaring a dynamic fact When you use PropertyChangeListener objects, each setter must implement additional code for the notification. For example, the following setter for state is in the class org.drools.examples : Setter example with PropertyChangeSupport public void setState(final int newState) { int oldState = this.state; this.state = newState; this.changes.firePropertyChange( "state", oldState, newState ); } 89.4. Fibonacci example decisions (recursion and conflict resolution) The Fibonacci example decision set demonstrates how the decision engine uses recursion to resolve execution conflicts for rules in a sequence. 
The example focuses on resolving conflicts through salience values that you can define in rules. The following is an overview of the Fibonacci example: Name : fibonacci Main class : org.drools.examples.fibonacci.FibonacciExample (in src/main/java ) Module : drools-examples Type : Java application Rule file : org.drools.examples.fibonacci.Fibonacci.drl (in src/main/resources ) Objective : Demonstrates recursion and conflict resolution through rule salience The Fibonacci Numbers form a sequence starting with 0 and 1. The Fibonacci number is obtained by adding the two preceding Fibonacci numbers: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, and so on. The Fibonacci example uses the single fact class Fibonacci with the following two fields: sequence value The sequence field indicates the position of the object in the Fibonacci number sequence. The value field shows the value of that Fibonacci object for that sequence position, where -1 indicates a value that still needs to be computed. Fibonacci class public static class Fibonacci { private int sequence; private long value; public Fibonacci( final int sequence ) { this.sequence = sequence; this.value = -1; } ... setters and getters go here... } To execute the example, run the org.drools.examples.fibonacci.FibonacciExample class as a Java application in your IDE. After the execution, the following output appears in the IDE console window: Fibonacci example output in the IDE console To achieve this behavior in Java, the example inserts a single Fibonacci object with a sequence field of 50 . The example then uses a recursive rule to insert the other 49 Fibonacci objects. Instead of implementing the PropertyChangeSupport interface to use dynamic facts, this example uses the MVEL dialect modify keyword to enable a block setter action and notify the decision engine of changes. Fibonacci example execution ksession.insert( new Fibonacci( 50 ) ); ksession.fireAllRules(); This example uses the following three rules: "Recurse" "Bootstrap" "Calculate" The rule "Recurse" matches each asserted Fibonacci object with a value of -1 , creating and asserting a new Fibonacci object with a sequence of one less than the currently matched object. Each time a Fibonacci object is added while the one with a sequence field equal to 1 does not exist, the rule re-matches and fires again. The not conditional element is used to stop the rule matching once you have all 50 Fibonacci objects in memory. The rule also has a salience value because you need to have all 50 Fibonacci objects asserted before you execute the "Bootstrap" rule. Rule "Recurse" To better understand the execution flow of this example, you can load the audit log file from target/fibonacci.log into your IDE debug view or Audit View , if available (for example, in Window Show View in some IDEs). In this example, the Audit View shows the original assertion of the Fibonacci object with a sequence field of 50 , done from Java code. From there on, the Audit View shows the continual recursion of the rule, where each asserted Fibonacci object causes the "Recurse" rule to become activated and to fire again. Figure 89.7. Rule "Recurse" in Audit View When a Fibonacci object with a sequence field of 2 is asserted, the "Bootstrap" rule is matched and activated along with the "Recurse" rule. 
Notice the multiple restrictions on field sequence that test for equality with 1 or 2 : Rule "Bootstrap" You can also use the Agenda View in your IDE to investigate the state of the decision engine agenda. The "Bootstrap" rule does not fire yet because the "Recurse" rule has a higher salience value. Figure 89.8. Rules "Recurse" and "Bootstrap" in Agenda View 1 When a Fibonacci object with a sequence of 1 is asserted, the "Bootstrap" rule is matched again, causing two activations for this rule. The "Recurse" rule does not match and activate because the not conditional element stops the rule matching as soon as a Fibonacci object with a sequence of 1 exists. Figure 89.9. Rules "Recurse" and "Bootstrap" in Agenda View 2 The "Bootstrap" rule sets the objects with a sequence of 1 and 2 to a value of 1 . Now that you have two Fibonacci objects with values not equal to -1 , the "Calculate" rule is able to match. At this point in the example, nearly 50 Fibonacci objects exist in the working memory. You need to select a suitable triple to calculate each of their values in turn. If you use three Fibonacci patterns in a rule without field constraints to confine the possible cross products, the result would be 50x49x48 possible combinations, leading to about 125,000 possible rule firings, most of them incorrect. The "Calculate" rule uses field constraints to evaluate the three Fibonacci patterns in the correct order. This technique is called cross-product matching . The first pattern finds any Fibonacci object with a value != -1 and binds both the pattern and the field. The second Fibonacci object does the same thing, but adds an additional field constraint to ensure that its sequence is greater by one than the Fibonacci object bound to f1 . When this rule fires for the first time, you know that only sequences 1 and 2 have values of 1 , and the two constraints ensure that f1 references sequence 1 and that f2 references sequence 2 . The final pattern finds the Fibonacci object with a value equal to -1 and with a sequence one greater than f2 . At this point in the example, three Fibonacci objects are correctly selected from the available cross products, and you can calculate the value for the third Fibonacci object that is bound to f3 . Rule "Calculate" The modify statement updates the value of the Fibonacci object bound to f3 . This means that you now have another new Fibonacci object with a value not equal to -1 , which allows the "Calculate" rule to re-match and calculate the Fibonacci number. The debug view or Audit View of your IDE shows how the firing of the last "Bootstrap" rule modifies the Fibonacci object, enabling the "Calculate" rule to match, which then modifies another Fibonacci object that enables the "Calculate" rule to match again. This process continues until the value is set for all Fibonacci objects. Figure 89.10. Rules in Audit View 89.5. Pricing example decisions (decision tables) The Pricing example decision set demonstrates how to use a spreadsheet decision table for calculating the retail cost of an insurance policy in tabular format instead of directly in a DRL file. 
The following is an overview of the Pricing example: Name : decisiontable Main class : org.drools.examples.decisiontable.PricingRuleDTExample (in src/main/java ) Module : drools-examples Type : Java application Rule file : org.drools.examples.decisiontable.ExamplePolicyPricing.xls (in src/main/resources ) Objective : Demonstrates use of spreadsheet decision tables to define rules Spreadsheet decision tables are XLS or XLSX spreadsheets that contain business rules defined in a tabular format. You can include spreadsheet decision tables with standalone Red Hat Decision Manager projects or upload them to projects in Business Central. Each row in a decision table is a rule, and each column is a condition, an action, or another rule attribute. After you create and upload your decision tables into your Red Hat Decision Manager project, the rules you defined are compiled into Drools Rule Language (DRL) rules as with all other rule assets. The purpose of the Pricing example is to provide a set of business rules to calculate the base price and a discount for a car driver applying for a specific type of insurance policy. The driver's age and history and the policy type all contribute to calculate the basic premium, and additional rules calculate potential discounts for which the driver might be eligible. To execute the example, run the org.drools.examples.decisiontable.PricingRuleDTExample class as a Java application in your IDE. After the execution, the following output appears in the IDE console window: The code to execute the example follows the typical execution pattern: the rules are loaded, the facts are inserted, and a stateless KIE session is created. The difference in this example is that the rules are defined in an ExamplePolicyPricing.xls file instead of a DRL file or other source. The spreadsheet file is loaded into the decision engine using templates and DRL rules. Spreadsheet decision table setup The ExamplePolicyPricing.xls spreadsheet contains two decision tables in the first tab: Base pricing rules Promotional discount rules As the example spreadsheet demonstrates, you can use only the first tab of a spreadsheet to create decision tables, but multiple tables can be within a single tab. Decision tables do not necessarily follow top-down logic, but are more of a means to capture data resulting in rules. The evaluation of the rules is not necessarily in the given order, because all of the normal mechanics of the decision engine still apply. This is why you can have multiple decision tables in the same tab of a spreadsheet. The decision tables are executed through the corresponding rule template files BasePricing.drt and PromotionalPricing.drt . These template files reference the decision tables through their template parameter and directly reference the various headers for the conditions and actions in the decision tables. BasePricing.drt rule template file PromotionalPricing.drt rule template file The rules are executed through the kmodule.xml reference of the KIE Session DTableWithTemplateKB , which specifically mentions the ExamplePolicyPricing.xls spreadsheet and is required for successful execution of the rules. This execution method enables you to execute the rules as a standalone unit (as in this example) or to include the rules in a packaged knowledge JAR (KJAR) file, so that the spreadsheet is packaged along with the rules for execution. 
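The typical execution pattern mentioned above can be reduced to a few lines of Java. The following is a minimal sketch rather than the example's exact code: it assumes the drools-examples module, including ExamplePolicyPricing.xls and its kmodule.xml file, is on the class path, that the example's Driver and Policy classes can be constructed with their default values, and that Policy exposes a getBasePrice() getter, which is an assumption based on the setBasePrice(...) action used in the decision table.

import java.util.Arrays;

import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.StatelessKieSession;

import org.drools.examples.decisiontable.Driver;
import org.drools.examples.decisiontable.Policy;

public class PricingDriver {

    public static void main(String[] args) {
        KieServices ks = KieServices.Factory.get();
        KieContainer kc = ks.getKieClasspathContainer();

        // Stateless session backed by the decision table rules.
        StatelessKieSession ksession = kc.newStatelessKieSession("DecisionTableKS");

        // Both facts rely on the example defaults: a 30-year-old driver with no
        // prior claims and a LOW risk profile, applying for a COMPREHENSIVE policy.
        Driver driver = new Driver();
        Policy policy = new Policy();

        // A stateless session inserts the facts, fires all rules, and releases
        // the working memory in a single call.
        ksession.execute(Arrays.asList(driver, policy));

        // getBasePrice() is assumed here; it mirrors the setBasePrice(...) action
        // column of the "Base pricing rules" decision table.
        System.out.println("BASE PRICE IS: " + policy.getBasePrice());
    }
}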
The following section of the kmodule.xml file is required for the execution of the rules and spreadsheet to work successfully: <kbase name="DecisionTableKB" packages="org.drools.examples.decisiontable"> <ksession name="DecisionTableKS" type="stateless"/> </kbase> <kbase name="DTableWithTemplateKB" packages="org.drools.examples.decisiontable-template"> <ruleTemplate dtable="org/drools/examples/decisiontable-template/ExamplePolicyPricingTemplateData.xls" template="org/drools/examples/decisiontable-template/BasePricing.drt" row="3" col="3"/> <ruleTemplate dtable="org/drools/examples/decisiontable-template/ExamplePolicyPricingTemplateData.xls" template="org/drools/examples/decisiontable-template/PromotionalPricing.drt" row="18" col="3"/> <ksession name="DTableWithTemplateKS"/> </kbase> As an alternative to executing the decision tables using rule template files, you can use the DecisionTableConfiguration object and specify an input spreadsheet as the input type, such as DecisionTableInputType.xls : DecisionTableConfiguration dtableconfiguration = KnowledgeBuilderFactory.newDecisionTableConfiguration(); dtableconfiguration.setInputType( DecisionTableInputType.XLS ); KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder(); Resource xlsRes = ResourceFactory.newClassPathResource( "ExamplePolicyPricing.xls", getClass() ); kbuilder.add( xlsRes, ResourceType.DTABLE, dtableconfiguration ); The Pricing example uses two fact types: Driver Policy . The example sets the default values for both facts in their respective Java classes Driver.java and Policy.java . The Driver is 30 years old, has had no prior claims, and currently has a risk profile of LOW . The Policy that the driver is applying for is COMPREHENSIVE . In any decision table, each row is considered a different rule and each column is a condition or an action. Each row is evaluated in a decision table unless the agenda is cleared upon execution. Decision table spreadsheets (XLS or XLSX) require two key areas that define rule data: A RuleSet area A RuleTable area The RuleSet area of the spreadsheet defines elements that you want to apply globally to all rules in the same package (not only the spreadsheet), such as a rule set name or universal rule attributes. The RuleTable area defines the actual rules (rows) and the conditions, actions, and other rule attributes (columns) that constitute that rule table within the specified rule set. A decision table spreadsheet can contain multiple RuleTable areas, but only one RuleSet area. Figure 89.11. Decision table configuration The RuleTable area also defines the objects to which the rule attributes apply, in this case Driver and Policy , followed by constraints on the objects. For example, the Driver object constraint that defines the Age Bracket column is age >= USD1, age <= USD2 , where the comma-separated range is defined in the table column values, such as 18,24 . Base pricing rules The Base pricing rules decision table in the Pricing example evaluates the age, risk profile, number of claims, and policy type of the driver and produces the base price of the policy based on these conditions. Figure 89.12. Base price calculation The Driver attributes are defined in the following table columns: Age Bracket : The age bracket has a definition for the condition age >=USD1, age <=USD2 , which defines the condition boundaries for the driver's age. This condition column highlights the use of USD1 and USD2 , which is comma delimited in the spreadsheet. 
You can write these values as 18,24 or 18, 24 and both formats work in the execution of the business rules. Location risk profile : The risk profile is a string that the example program passes always as LOW but can be changed to reflect MED or HIGH . Number of prior claims : The number of claims is defined as an integer that the condition column must exactly equal to trigger the action. The value is not a range, only exact matches. The Policy of the decision table is used in both the conditions and the actions of the rule and has attributes defined in the following table columns: Policy type applying for : The policy type is a condition that is passed as a string that defines the type of coverage: COMPREHENSIVE , FIRE_THEFT , or THIRD_PARTY . Base USD AUD : The basePrice is defined as an ACTION that sets the price through the constraint policy.setBasePrice(USDparam); based on the spreadsheet cells corresponding to this value. When you execute the corresponding DRL rule for this decision table, the then portion of the rule executes this action statement on the true conditions matching the facts and sets the base price to the corresponding value. Record Reason : When the rule successfully executes, this action generates an output message to the System.out console reflecting which rule fired. This is later captured in the application and printed. The example also uses the first column on the left to categorize rules. This column is for annotation only and has no affect on rule execution. Promotional discount rules The Promotional discount rules decision table in the Pricing example evaluates the age, number of prior claims, and policy type of the driver to generate a potential discount on the price of the insurance policy. Figure 89.13. Discount calculation This decision table contains the conditions for the discount for which the driver might be eligible. Similar to the base price calculation, this table evaluates the Age , Number of prior claims of the driver, and the Policy type applying for to determine a Discount % rate to be applied. For example, if the driver is 30 years old, has no prior claims, and is applying for a COMPREHENSIVE policy, the driver is given a discount of 20 percent. 89.6. Pet Store example decisions (agenda groups, global variables, callbacks, and GUI integration) The Pet Store example decision set demonstrates how to use agenda groups and global variables in rules and how to integrate Red Hat Decision Manager rules with a graphical user interface (GUI), in this case a Swing-based desktop application. The example also demonstrates how to use callbacks to interact with a running decision engine to update the GUI based on changes in the working memory at run time. The following is an overview of the Pet Store example: Name : petstore Main class : org.drools.examples.petstore.PetStoreExample (in src/main/java ) Module : drools-examples Type : Java application Rule file : org.drools.examples.petstore.PetStore.drl (in src/main/resources ) Objective : Demonstrates rule agenda groups, global variables, callbacks, and GUI integration In the Pet Store example, the sample PetStoreExample.java class defines the following principal classes (in addition to several classes to handle Swing events): Petstore contains the main() method. PetStoreUI is responsible for creating and displaying the Swing-based GUI. This class contains several smaller classes, mainly for responding to various GUI events, such as user mouse clicks. TableModel holds the table data. 
This class is essentially a JavaBean that extends the Swing class AbstractTableModel . CheckoutCallback enables the GUI to interact with the rules. Ordershow keeps the items that you want to buy. Purchase stores details of the order and the products that you are buying. Product is a JavaBean containing details of the product available for purchase and its price. Much of the Java code in this example is either plain JavaBean or Swing based. For more information about Swing components, see the Java tutorial on Creating a GUI with JFC/Swing . Rule execution behavior in the Pet Store example Unlike other example decision sets where the facts are asserted and fired immediately, the Pet Store example does not execute the rules until more facts are gathered based on user interaction. The example executes rules through a PetStoreUI object, created by a constructor, that accepts the Vector object stock for collecting the products. The example then uses an instance of the CheckoutCallback class containing the rule base that was previously loaded. Pet Store KIE container and fact execution setup // KieServices is the factory for all KIE services. KieServices ks = KieServices.Factory.get(); // Create a KIE container on the class path. KieContainer kc = ks.getKieClasspathContainer(); // Create the stock. Vector<Product> stock = new Vector<Product>(); stock.add( new Product( "Gold Fish", 5 ) ); stock.add( new Product( "Fish Tank", 25 ) ); stock.add( new Product( "Fish Food", 2 ) ); // A callback is responsible for populating the working memory and for firing all rules. PetStoreUI ui = new PetStoreUI( stock, new CheckoutCallback( kc ) ); ui.createAndShowGUI(); The Java code that fires the rules is in the CheckoutCallBack.checkout() method. This method is triggered when the user clicks Checkout in the UI. Rule execution from CheckoutCallBack.checkout() public String checkout(JFrame frame, List<Product> items) { Order order = new Order(); // Iterate through list and add to cart. for ( Product p: items ) { order.addItem( new Purchase( order, p ) ); } // Add the JFrame to the ApplicationData to allow for user interaction. // From the KIE container, a KIE session is created based on // its definition and configuration in the META-INF/kmodule.xml file. KieSession ksession = kcontainer.newKieSession("PetStoreKS"); ksession.setGlobal( "frame", frame ); ksession.setGlobal( "textArea", this.output ); ksession.insert( new Product( "Gold Fish", 5 ) ); ksession.insert( new Product( "Fish Tank", 25 ) ); ksession.insert( new Product( "Fish Food", 2 ) ); ksession.insert( new Product( "Fish Food Sample", 0 ) ); ksession.insert( order ); // Execute rules. ksession.fireAllRules(); // Return the state of the cart return order.toString(); } The example code passes two elements into the CheckoutCallBack.checkout() method. One element is the handle for the JFrame Swing component surrounding the output text frame, found at the bottom of the GUI. The second element is a list of order items, which comes from the TableModel that stores the information from the Table area at the upper-right section of the GUI. The for loop transforms the list of order items coming from the GUI into the Order JavaBean, also contained in the file PetStoreExample.java . In this case, the rule is firing in a stateless KIE session because all of the data is stored in Swing components and is not executed until the user clicks Checkout in the UI. 
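The checkout() method illustrates a common integration pattern: create a fresh KIE session for each user action, pass the UI components in as globals, insert the facts, and fire the rules. Stripped of the Swing specifics, the pattern looks like the following minimal sketch; the class name, session name, global name, and return value are illustrative placeholders, not part of the Pet Store source, and must match whatever your application's kmodule.xml and DRL files declare.

import java.util.List;

import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

// Illustrative callback: one short-lived KIE session per user action.
// "CheckoutKS" and "textLog" are placeholder names that must match the
// session and global declarations in the application's kmodule.xml and DRL.
public class SessionPerActionCallback {

    private final KieContainer kieContainer;

    public SessionPerActionCallback(KieContainer kieContainer) {
        this.kieContainer = kieContainer;
    }

    public int process(Object uiHandle, List<Object> facts) {
        // A new session per call keeps each user action isolated from the next.
        KieSession ksession = kieContainer.newKieSession("CheckoutKS");
        try {
            // Globals stay visible to every rule for the life of this session.
            ksession.setGlobal("textLog", uiHandle);

            // Move the UI state into working memory.
            for (Object fact : facts) {
                ksession.insert(fact);
            }

            // Returns the number of rules that fired.
            return ksession.fireAllRules();
        } finally {
            // Dispose to release the working memory created for this action.
            ksession.dispose();
        }
    }
}

Creating and disposing of the session inside the callback mirrors the Pet Store behavior: nothing is shared between checkouts, so stale facts from a previous click cannot influence the next one.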
Each time the user clicks Checkout , the content of the list is moved from the Swing TableModel into the KIE session working memory and is then executed with the ksession.fireAllRules() method. Within this code, there are nine calls to KieSession . The first of these creates a new KieSession from the KieContainer (the example passed in this KieContainer from the CheckoutCallBack class in the main() method). The two calls pass in the two objects that hold the global variables in the rules: the Swing text area and the Swing frame used for writing messages. More inserts put information on products into the KieSession , as well as the order list. The final call is the standard fireAllRules() . Pet Store rule file imports, global variables, and Java functions The PetStore.drl file contains the standard package and import statements to make various Java classes available to the rules. The rule file also includes global variables to be used within the rules, defined as frame and textArea . The global variables hold references to the Swing components JFrame and JTextArea components that were previously passed on by the Java code that called the setGlobal() method. Unlike standard variables in rules, which expire as soon as the rule has fired, global variables retain their value for the lifetime of the KIE session. This means the contents of these global variables are available for evaluation on all subsequent rules. PetStore.drl package, imports, and global variables package org.drools.examples; import org.kie.api.runtime.KieRuntime; import org.drools.examples.petstore.PetStoreExample.Order; import org.drools.examples.petstore.PetStoreExample.Purchase; import org.drools.examples.petstore.PetStoreExample.Product; import java.util.ArrayList; import javax.swing.JOptionPane; import javax.swing.JFrame; global JFrame frame global javax.swing.JTextArea textArea The PetStore.drl file also contains two functions that the rules in the file use: PetStore.drl Java functions function void doCheckout(JFrame frame, KieRuntime krt) { Object[] options = {"Yes", "No"}; int n = JOptionPane.showOptionDialog(frame, "Would you like to checkout?", "", JOptionPane.YES_NO_OPTION, JOptionPane.QUESTION_MESSAGE, null, options, options[0]); if (n == 0) { krt.getAgenda().getAgendaGroup( "checkout" ).setFocus(); } } function boolean requireTank(JFrame frame, KieRuntime krt, Order order, Product fishTank, int total) { Object[] options = {"Yes", "No"}; int n = JOptionPane.showOptionDialog(frame, "Would you like to buy a tank for your " + total + " fish?", "Purchase Suggestion", JOptionPane.YES_NO_OPTION, JOptionPane.QUESTION_MESSAGE, null, options, options[0]); System.out.print( "SUGGESTION: Would you like to buy a tank for your " + total + " fish? - " ); if (n == 0) { Purchase purchase = new Purchase( order, fishTank ); krt.insert( purchase ); order.addItem( purchase ); System.out.println( "Yes" ); } else { System.out.println( "No" ); } return true; } The two functions perform the following actions: doCheckout() displays a dialog that asks the user if she or he wants to check out. If the user does, the focus is set to the checkout agenda group, enabling rules in that group to (potentially) fire. requireTank() displays a dialog that asks the user if she or he wants to buy a fish tank. If the user does, a new fish tank Product is added to the order list in the working memory. Note For this example, all rules and functions are within the same rule file for efficiency. 
In a production environment, you typically separate the rules and functions in different files or build a static Java method and import the files using the import function, such as import function my.package.name.hello . Pet Store rules with agenda groups Most of the rules in the Pet Store example use agenda groups to control rule execution. Agenda groups allow you to partition the decision engine agenda to provide more execution control over groups of rules. By default, all rules are in the agenda group MAIN . You can use the agenda-group attribute to specify a different agenda group for the rule. Initially, a working memory has its focus on the agenda group MAIN . Rules in an agenda group only fire when the group receives the focus. You can set the focus either by using the method setFocus() or the rule attribute auto-focus . The auto-focus attribute enables the rule to be given a focus automatically for its agenda group when the rule is matched and activated. The Pet Store example uses the following agenda groups for rules: "init" "evaluate" "show items" "checkout" For example, the sample rule "Explode Cart" uses the "init" agenda group to ensure that it has the option to fire and insert shopping cart items into the KIE session working memory: Rule "Explode Cart" This rule matches against all orders that do not yet have their grossTotal calculated. The execution loops for each purchase item in that order. The rule uses the following features related to its agenda group: agenda-group "init" defines the name of the agenda group. In this case, only one rule is in the group. However, neither the Java code nor a rule consequence sets the focus to this group, and therefore it relies on the auto-focus attribute for its chance to fire. auto-focus true ensures that this rule, while being the only rule in the agenda group, gets a chance to fire when fireAllRules() is called from the Java code. kcontext... .setFocus() sets the focus to the "show items" and "evaluate" agenda groups, enabling their rules to fire. In practice, you loop through all items in the order, insert them into memory, and then fire the other rules after each insertion. The "show items" agenda group contains only one rule, "Show Items" . For each purchase in the order currently in the KIE session working memory, the rule logs details to the text area at the bottom of the GUI, based on the textArea variable defined in the rule file. Rule "Show Items" The "evaluate" agenda group also gains focus from the "Explode Cart" rule. This agenda group contains two rules, "Free Fish Food Sample" and "Suggest Tank" , which are executed in that order. Rule "Free Fish Food Sample" The rule "Free Fish Food Sample" fires only if all of the following conditions are true: 1 The agenda group "evaluate" is being evaluated in the rules execution. 2 User does not already have fish food. 3 User does not already have a free fish food sample. 4 User has a goldfish in the order. If the order facts meet all of these requirements, then a new product is created (Fish Food Sample) and is added to the order in working memory. Rule "Suggest Tank" The rule "Suggest Tank" fires only if the following conditions are true: 1 User does not have a fish tank in the order. 2 User has more than five fish in the order. When the rule fires, it calls the requireTank() function defined in the rule file. This function displays a dialog that asks the user if she or he wants to buy a fish tank. 
If the user does, a new fish tank Product is added to the order list in the working memory. When the rule calls the requireTank() function, the rule passes the frame global variable so that the function has a handle for the Swing GUI. The "do checkout" rule in the Pet Store example has no agenda group and no when conditions, so the rule is always executed and considered part of the default MAIN agenda group. Rule "do checkout" When the rule fires, it calls the doCheckout() function defined in the rule file. This function displays a dialog that asks the user if she or he wants to check out. If the user does, the focus is set to the checkout agenda group, enabling rules in that group to (potentially) fire. When the rule calls the doCheckout() function, the rule passes the frame global variable so that the function has a handle for the Swing GUI. Note This example also demonstrates a troubleshooting technique if results are not executing as you expect: You can remove the conditions from the when statement of a rule and test the action in the then statement to verify that the action is performed correctly. The "checkout" agenda group contains three rules for processing the order checkout and applying any discounts: "Gross Total" , "Apply 5% Discount" , and "Apply 10% Discount" . Rules "Gross Total", "Apply 5% Discount", and "Apply 10% Discount" If the user has not already calculated the gross total, the Gross Total accumulates the product prices into a total, puts this total into the KIE session, and displays it through the Swing JTextArea using the textArea global variable. If the gross total is between 10 and 20 (currency units), the "Apply 5% Discount" rule calculates the discounted total, adds it to the KIE session, and displays it in the text area. If the gross total is not less than 20 , the "Apply 10% Discount" rule calculates the discounted total, adds it to the KIE session, and displays it in the text area. Pet Store example execution Similar to other Red Hat Decision Manager decision examples, you execute the Pet Store example by running the org.drools.examples.petstore.PetStoreExample class as a Java application in your IDE. When you execute the Pet Store example, the Pet Store Demo GUI window appears. This window displays a list of available products (upper left), an empty list of selected products (upper right), Checkout and Reset buttons (middle), and an empty system messages area (bottom). Figure 89.14. Pet Store example GUI after launch The following events occurred in this example to establish this execution behavior: The main() method has run and loaded the rule base but has not yet fired the rules. So far, this is the only code in connection with rules that has been run. A new PetStoreUI object has been created and given a handle for the rule base, for later use. Various Swing components have performed their functions, and the initial UI screen is displayed and waits for user input. You can click various products from the list to explore the UI setup: Figure 89.15. Explore the Pet Store example GUI No rules code has been fired yet. The UI uses Swing code to detect user mouse clicks and add selected products to the TableModel object for display in the upper-right corner of the UI. This example illustrates the Model-View-Controller design pattern. When you click Checkout , the rules are then fired in the following way: Method CheckOutCallBack.checkout() is called (eventually) by the Swing class waiting for a user to click Checkout . 
This inserts the data from the TableModel object (upper-right corner of the UI) into the KIE session working memory. The method then fires the rules. The "Explode Cart" rule is the first to fire, with the auto-focus attribute set to true . The rule loops through all of the products in the cart, ensures that the products are in the working memory, and then gives the "show items" and "evaluate" agenda groups the option to fire. The rules in these groups add the contents of the cart to the text area (bottom of the UI), evaluate whether you are eligible for free fish food, and determine whether to ask if you want to buy a fish tank. Figure 89.16. Fish tank qualification The "do checkout" rule is the next to fire because no other agenda group currently has focus and because it is part of the default MAIN agenda group. This rule always calls the doCheckout() function, which asks you if you want to check out. The doCheckout() function sets the focus to the "checkout" agenda group, giving the rules in that group the option to fire. The rules in the "checkout" agenda group display the contents of the cart and apply the appropriate discount. Swing then waits for user input to either select more products (and cause the rules to fire again) or to close the UI. Figure 89.17. Pet Store example GUI after all rules have fired You can add more System.out calls to demonstrate this flow of events in your IDE console: System.out output in the IDE console 89.7. Honest Politician example decisions (truth maintenance and salience) The Honest Politician example decision set demonstrates the concept of truth maintenance with logical insertions and the use of salience in rules. The following is an overview of the Honest Politician example: Name : honestpolitician Main class : org.drools.examples.honestpolitician.HonestPoliticianExample (in src/main/java ) Module : drools-examples Type : Java application Rule file : org.drools.examples.honestpolitician.HonestPolitician.drl (in src/main/resources ) Objective : Demonstrates the concept of truth maintenance based on the logical insertion of facts and the use of salience in rules The basic premise of the Honest Politician example is that an object can only exist while a statement is true. A rule consequence can logically insert an object with the insertLogical() method. This means the object remains in the KIE session working memory as long as the rule that logically inserted it remains true. When the rule is no longer true, the object is automatically retracted. In this example, rule execution causes a group of politicians to change from being honest to being dishonest as a result of a corrupt corporation. As each politician is evaluated, the politician starts out with the honest attribute set to true , but a rule fires that makes the politician dishonest. As the politicians switch their state from honest to dishonest, they are removed from the working memory. The salience attribute tells the decision engine how to prioritize rules that define a salience value; rules without a defined salience use the default value of 0 . Rules with a higher salience value are given higher priority when ordered in the activation queue. Politician and Hope classes The sample class Politician in the example is configured for an honest politician. The Politician class is made up of a String item name and a boolean item honest : Politician class public class Politician { private String name; private boolean honest; ... } The Hope class determines if a Hope object exists.
This class has no meaningful members, but is present in the working memory as long as society has hope. Hope class public class Hope { public Hope() { } } Rule definitions for politician honesty In the Honest Politician example, when at least one honest politician exists in the working memory, the "We have an honest Politician" rule logically inserts a new Hope object. As soon as all politicians become dishonest, the Hope object is automatically retracted. This rule has a salience attribute with a value of 10 to ensure that it fires before any other rule, because at that stage the "Hope is Dead" rule is true. Rule "We have an honest politician" As soon as a Hope object exists, the "Hope Lives" rule matches and fires. This rule also has a salience value of 10 so that it takes priority over the "Corrupt the Honest" rule. Rule "Hope Lives" Initially, four honest politicians exist so this rule has four activations, all in conflict. Each rule fires in turn, corrupting each politician so that they are no longer honest. When all four politicians have been corrupted, no politicians have the property honest == true . The rule "We have an honest Politician" is no longer true and the object it logically inserted (due to the last execution of new Hope() ) is automatically retracted. Rule "Corrupt the Honest" With the Hope object automatically retracted through the truth maintenance system, the conditional element not applied to Hope is no longer true so that the "Hope is Dead" rule matches and fires. Rule "Hope is Dead" Example execution and audit trail In the HonestPoliticianExample.java class, the four politicians with the honest state set to true are inserted for evaluation against the defined business rules: HonestPoliticianExample.java class execution public static void execute( KieContainer kc ) { KieSession ksession = kc.newKieSession("HonestPoliticianKS"); final Politician p1 = new Politician( "President of Umpa Lumpa", true ); final Politician p2 = new Politician( "Prime Minster of Cheeseland", true ); final Politician p3 = new Politician( "Tsar of Pringapopaloo", true ); final Politician p4 = new Politician( "Omnipotence Om", true ); ksession.insert( p1 ); ksession.insert( p2 ); ksession.insert( p3 ); ksession.insert( p4 ); ksession.fireAllRules(); ksession.dispose(); } To execute the example, run the org.drools.examples.honestpolitician.HonestPoliticianExample class as a Java application in your IDE. After the execution, the following output appears in the IDE console window: Execution output in the IDE console The output shows that, while there is at least one honest politician, democracy lives. However, as each politician is corrupted by some corporation, all politicians become dishonest, and democracy is dead. 
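To see the salience-driven firing order directly, you can attach a simple agenda listener that prints each rule name as it fires. The following is a minimal sketch, assuming the HonestPoliticianKS session definition used in the execute() method above, and assuming the sketch is placed in the example's package so that the Politician class resolves; the class name SalienceOrderDemo is a hypothetical name, not part of the example. Salience listener sketch
package org.drools.examples.honestpolitician;

import org.kie.api.KieServices;
import org.kie.api.event.rule.AfterMatchFiredEvent;
import org.kie.api.event.rule.DefaultAgendaEventListener;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

public class SalienceOrderDemo {
    public static void main(String[] args) {
        KieServices ks = KieServices.Factory.get();
        KieContainer kc = ks.getKieClasspathContainer();
        KieSession ksession = kc.newKieSession("HonestPoliticianKS");

        // Print each rule name as it fires so that the salience-driven order is visible.
        ksession.addEventListener(new DefaultAgendaEventListener() {
            @Override
            public void afterMatchFired(AfterMatchFiredEvent event) {
                System.out.println("fired: " + event.getMatch().getRule().getName());
            }
        });

        // Two honest politicians are enough to observe the ordering.
        ksession.insert(new Politician("President of Umpa Lumpa", true));
        ksession.insert(new Politician("Prime Minster of Cheeseland", true));
        ksession.fireAllRules();
        ksession.dispose();
    }
}
With this listener attached, you would expect the higher-salience rules "We have an honest Politician" and "Hope Lives" to appear before the "Corrupt the Honest" firings, followed by "Hope is Dead" once the last politician is corrupted.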
To better understand the execution flow of this example, you can modify the HonestPoliticianExample.java class to include a DebugRuleRuntimeEventListener listener and an audit logger to view execution details: HonestPoliticianExample.java class with an audit logger package org.drools.examples.honestpolitician; import org.kie.api.KieServices; import org.kie.api.event.rule.DebugAgendaEventListener; 1 import org.kie.api.event.rule.DebugRuleRuntimeEventListener; import org.kie.api.runtime.KieContainer; import org.kie.api.runtime.KieSession; public class HonestPoliticianExample { /** * @param args */ public static void main(final String[] args) { KieServices ks = KieServices.Factory.get(); 2 //ks = KieServices.Factory.get(); KieContainer kc = KieServices.Factory.get().getKieClasspathContainer(); System.out.println(kc.verify().getMessages().toString()); //execute( kc ); execute( ks, kc); 3 } public static void execute( KieServices ks, KieContainer kc ) { 4 KieSession ksession = kc.newKieSession("HonestPoliticianKS"); final Politician p1 = new Politician( "President of Umpa Lumpa", true ); final Politician p2 = new Politician( "Prime Minster of Cheeseland", true ); final Politician p3 = new Politician( "Tsar of Pringapopaloo", true ); final Politician p4 = new Politician( "Omnipotence Om", true ); ksession.insert( p1 ); ksession.insert( p2 ); ksession.insert( p3 ); ksession.insert( p4 ); // The application can also setup listeners 5 ksession.addEventListener( new DebugAgendaEventListener() ); ksession.addEventListener( new DebugRuleRuntimeEventListener() ); // Set up a file-based audit logger. ks.getLoggers().newFileLogger( ksession, "./target/honestpolitician" ); 6 ksession.fireAllRules(); ksession.dispose(); } } 1 Adds to your imports the packages that handle the DebugAgendaEventListener and DebugRuleRuntimeEventListener 2 Creates a KieServices Factory and a ks element to produce the logs because this audit log is not available at the KieContainer level 3 Modifies the execute method to use both KieServices and KieContainer 4 Modifies the execute method to pass in KieServices in addition to the KieContainer 5 Creates the listeners 6 Builds the log that can be passed into the debug view or Audit View or your IDE after executing of the rules When you run the Honest Politician with this modified logging capability, you can load the audit log file from target/honestpolitician.log into your IDE debug view or Audit View , if available (for example, in Window Show View in some IDEs). In this example, the Audit View shows the flow of executions, insertions, and retractions as defined in the example classes and rules: Figure 89.18. Honest Politician example Audit View When the first politician is inserted, two activations occur. The rule "We have an honest Politician" is activated only one time for the first inserted politician because it uses an exists conditional element, which matches when at least one politician is inserted. The rule "Hope is Dead" is also activated at this stage because the Hope object is not yet inserted. The rule "We have an honest Politician" fires first because it has a higher salience value than the rule "Hope is Dead" , and inserts the Hope object (highlighted in green). The insertion of the Hope object activates the rule "Hope Lives" and deactivates the rule "Hope is Dead" . The insertion also activates the rule "Corrupt the Honest" for each inserted honest politician. The rule "Hope Lives" is executed and prints "Hurrah!!! Democracy Lives" . 
Next, for each politician, the rule "Corrupt the Honest" fires, printing "I'm an evil corporation and I have corrupted X" , where X is the name of the politician, and sets the politician's honest attribute to false . When the last honest politician is corrupted, Hope is automatically retracted by the truth maintenance system (highlighted in blue). The green highlighted area shows the origin of the currently selected blue highlighted area. After the Hope fact is retracted, the rule "Hope is Dead" fires, printing "We are all Doomed!!! Democracy is Dead" . 89.8. Sudoku example decisions (complex pattern matching, callbacks, and GUI integration) The Sudoku example decision set, based on the popular number puzzle Sudoku, demonstrates how to use rules in Red Hat Decision Manager to find a solution in a large potential solution space based on various constraints. This example also shows how to integrate Red Hat Decision Manager rules into a graphical user interface (GUI), in this case a Swing-based desktop application, and how to use callbacks to interact with a running decision engine to update the GUI based on changes in the working memory at run time. The following is an overview of the Sudoku example: Name : sudoku Main class : org.drools.examples.sudoku.SudokuExample (in src/main/java ) Module : drools-examples Type : Java application Rule files : org.drools.examples.sudoku.*.drl (in src/main/resources ) Objective : Demonstrates complex pattern matching, problem solving, callbacks, and GUI integration Sudoku is a logic-based number placement puzzle. The objective is to fill a 9x9 grid so that each column, each row, and each of the nine 3x3 zones contains the digits from 1 to 9 only one time. The puzzle setter provides a partially completed grid and the puzzle solver's task is to complete the grid with these constraints. The general strategy to solve the problem is to ensure that when you insert a new number, it must be unique in its particular 3x3 zone, row, and column. This Sudoku example decision set uses Red Hat Decision Manager rules to solve Sudoku puzzles from a range of difficulty levels, and to attempt to resolve flawed puzzles that contain invalid entries. Sudoku example execution and interaction Similar to other Red Hat Decision Manager decision examples, you execute the Sudoku example by running the org.drools.examples.sudoku.SudokuExample class as a Java application in your IDE. When you execute the Sudoku example, the Drools Sudoku Example GUI window appears. This window contains an empty grid, but the program comes with various grids stored internally that you can load and solve. Click File → Samples → Simple to load one of the examples. Notice that all buttons are disabled until a grid is loaded. Figure 89.19. Sudoku example GUI after launch When you load the Simple example, the grid is filled according to the puzzle's initial state. Figure 89.20. Sudoku example GUI after loading Simple sample Choose from the following options: Click Solve to fire the rules defined in the Sudoku example that fill out the remaining values and that make the buttons inactive again. Figure 89.21. Simple sample solved Click Step to see the next digit found by the rule set. The console window in your IDE displays detailed information about the rules that are executing to solve the step. Step execution output in the IDE console Click Dump to see the state of the grid, with cells showing either the established value or the remaining possibilities.
Dump execution output in the IDE console The Sudoku example includes a deliberately broken sample file that the rules defined in the example can detect and attempt to resolve. Click File → Samples → !DELIBERATELY BROKEN! to load the broken sample. The grid starts with some issues, for example, the value 5 appears two times in the first row, which is not allowed. Figure 89.22. Broken Sudoku example initial state Click Solve to apply the solving rules to this invalid grid. The associated solving rules in the Sudoku example detect the issues in the sample and attempt to solve the puzzle as far as possible. This process does not complete and leaves some cells empty. The solving rule activity is displayed in the IDE console window: Detected issues in the broken sample Figure 89.23. Broken sample solution attempt The sample Sudoku files labeled Hard are more complex and the solving rules might not be able to solve them. The unsuccessful solution attempt is displayed in the IDE console window: Hard sample unresolved The rules that work to solve the broken sample implement standard solving techniques based on the sets of values that are still candidates for a cell. For example, if a set contains a single value, then this is the value for the cell. For a single occurrence of a value in one of the groups of nine cells, the rules insert a fact of type Setting with the solution value for some specific cell. This fact causes the elimination of this value from all other cells in any of the groups the cell belongs to, and the Setting fact is then retracted. Other rules in the example reduce the permissible values for some cells. The rules "naked pair" , "hidden pair in row" , "hidden pair in column" , and "hidden pair in square" eliminate possibilities but do not establish solutions. The rules "X-wings in rows" , "X-wings in columns" , "intersection removal row" , and "intersection removal column" perform more sophisticated eliminations. Sudoku example classes The package org.drools.examples.sudoku.swing contains the following core set of classes that implement a framework for Sudoku puzzles: The SudokuGridModel class defines an interface that is implemented to store a Sudoku puzzle as a 9x9 grid of Cell objects. The SudokuGridView class is a Swing component that can visualize any implementation of the SudokuGridModel class. The SudokuGridEvent and SudokuGridListener classes communicate state changes between the model and the view. Events are fired when a cell value is resolved or changed. The SudokuGridSamples class provides partially filled Sudoku puzzles for demonstration purposes. Note This package does not have any dependencies on Red Hat Decision Manager libraries. The package org.drools.examples.sudoku contains the following core set of classes that implement the elementary Cell object and its various aggregations: The CellFile class, with subtypes CellRow , CellCol , and CellSqr , all of which are subtypes of the CellGroup class. The Cell and CellGroup subclasses of SetOfNine , which provides a property free with the type Set<Integer> . For a Cell class, the set represents the individual candidate set. For a CellGroup class, the set is the union of all candidate sets of its cells (the set of digits that still need to be allocated). The Sudoku example contains 81 Cell and 27 CellGroup objects and a linkage provided by the Cell properties cellRow , cellCol , and cellSqr , and by the CellGroup property cells (a list of Cell objects).
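Before looking at the rules, it can help to picture the candidate-set bookkeeping in plain Java. The following is a minimal sketch of the idea behind the free set and the blockValue() behavior used by the example; the class names SetOfNineSketch and CandidateSetDemo are hypothetical stand-ins, not the example's actual classes. Candidate-set sketch
import java.util.HashSet;
import java.util.Set;

// Simplified stand-in for the example's SetOfNine idea: each cell or cell group
// holds a "free" set of candidate digits that shrinks as values are assigned.
class SetOfNineSketch {
    final Set<Integer> free = new HashSet<>();

    SetOfNineSketch() {
        for (int d = 1; d <= 9; d++) {
            free.add(d);
        }
    }

    // Mirrors the idea of blockValue(): remove a digit that is no longer a candidate.
    void blockValue(int value) {
        free.remove(value);
    }

    int freeCount() {
        return free.size();
    }
}

public class CandidateSetDemo {
    public static void main(String[] args) {
        SetOfNineSketch cell = new SetOfNineSketch();

        // Assigning 5 and 7 elsewhere in the cell's row, column, or square
        // eliminates them from this cell's candidate set.
        cell.blockValue(5);
        cell.blockValue(7);
        System.out.println("remaining candidates: " + cell.free);

        // When only one candidate remains, a solving rule such as "single"
        // would insert a Setting fact for that value.
        if (cell.freeCount() == 1) {
            System.out.println("single candidate, ready to set");
        }
    }
}
Each assignment in a row, column, or square shrinks the candidate sets of the related cells, which is the same bookkeeping the solving rules perform with modify and blockValue.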
With these components, you can write rules that detect the specific situations that permit the allocation of a value to a cell or the elimination of a value from some candidate set. The Setting class is used to trigger the operations that accompany the allocation of a value. The presence of a Setting fact is used in all rules that detect a new situation in order to avoid reactions to inconsistent intermediary states. The Stepping class is used in a low priority rule to execute an emergency halt when a "Step" does not terminate regularly. This behavior indicates that the program cannot solve the puzzle. The main class org.drools.examples.sudoku.SudokuExample implements a Java application combining all of these components. Sudoku validation rules (validate.drl) The validate.drl file in the Sudoku example contains validation rules that detect duplicate numbers in cell groups. They are combined in a "validate" agenda group that enables the rules to be explicitly activated after a user loads the puzzle. The when conditions of the three rules "duplicate in cell ... " all function in the following ways: The first condition in the rule locates a cell with an allocated value. The second condition in the rule pulls in any of the three cell groups to which the cell belongs. The final condition finds a cell (other than the first one) with the same value as the first cell and in the same row, column, or square, depending on the rule. Rules "duplicate in cell ... " The rule "terminate group" is the last to fire. This rule prints a message and stops the sequence. Rule "terminate group" Sudoku solving rules (sudoku.drl) The sudoku.drl file in the Sudoku example contains three types of rules: one group handles the allocation of a number to a cell, another group detects feasible allocations, and the third group eliminates values from candidate sets. The rules "set a value" , "eliminate a value from Cell" , and "retract setting" depend on the presence of a Setting object. The first rule handles the assignment to the cell and the operations for removing the value from the free sets of the three groups of the cell. This group also reduces a counter that, when zero, returns control to the Java application that has called fireUntilHalt() . The purpose of the rule "eliminate a value from Cell" is to reduce the candidate lists of all cells that are related to the newly assigned cell. Finally, when all eliminations have been made, the rule "retract setting" retracts the triggering Setting fact. Rules "set a value", "eliminate a value from a Cell", and "retract setting" Two solving rules detect a situation where an allocation of a number to a cell is possible. The rule "single" fires for a Cell with a candidate set containing a single number. The rule "hidden single" fires when no cell exists with a single candidate, but when a cell exists containing a candidate, this candidate is absent from all other cells in one of the three groups to which the cell belongs. Both rules create and insert a Setting fact. Rules "single" and "hidden single" Rules from the largest group, either individually or in groups of two or three, implement various solving techniques used for solving Sudoku puzzles manually. The rule "naked pair" detects identical candidate sets of size 2 in two cells of a group. These two values may be removed from all other candidate sets of that group. Rule "naked pair" The three rules "hidden pair in ... " functions similarly to the rule "naked pair" . 
These rules detect a subset of two numbers in exactly two cells of a group, with neither value occurring in any of the other cells of the group. This means that all other candidates can be eliminated from the two cells harboring the hidden pair. Rules "hidden pair in ... " Two rules deal with "X-wings" in rows and columns. When only two possible cells for a value exist in each of two different rows (or columns) and these candidates also lie in the same columns (or rows), then all other candidates for this value in the columns (or rows) can be eliminated. When you follow the pattern sequence in one of these rules, notice how the conditions that are conveniently expressed by words such as same or only result in patterns with suitable constraints or that are prefixed with not . Rules "X-wings in ... " The two rules "intersection removal ... " are based on the restricted occurrence of some number within one square, either in a single row or in a single column. This means that this number must be in one of those two or three cells of the row or column and can be removed from the candidate sets of all other cells of the group. The pattern establishes the restricted occurrence and then fires for each cell outside of the square and within the same cell file. Rules "intersection removal ... " These rules are sufficient for many but not all Sudoku puzzles. To solve very difficult grids, the rule set requires more complex rules. (Ultimately, some puzzles can be solved only by trial and error.) 89.9. Conway's Game of Life example decisions (ruleflow groups and GUI integration) The Conway's Game of Life example decision set, based on the famous cellular automaton by John Conway, demonstrates how to use ruleflow groups in rules to control rule execution. The example also demonstrates how to integrate Red Hat Decision Manager rules with a graphical user interface (GUI), in this case a Swing-based implementation of Conway's Game of Life. The following is an overview of the Conway's Game of Life (Conway) example: Name : conway Main classes : org.drools.examples.conway.ConwayRuleFlowGroupRun , org.drools.examples.conway.ConwayAgendaGroupRun (in src/main/java ) Module : droolsjbpm-integration-examples Type : Java application Rule files : org.drools.examples.conway.*.drl (in src/main/resources ) Objective : Demonstrates ruleflow groups and GUI integration Note The Conway's Game of Life example is separate from most of the other example decision sets in Red Hat Decision Manager and is located in ~/rhpam-7.13.5-sources/src/droolsjbpm-integration-$VERSION/droolsjbpm-integration-examples of the Red Hat Process Automation Manager 7.13.5 Source Distribution from the Red Hat Customer Portal . In Conway's Game of Life, a user interacts with the game by creating an initial configuration or an advanced pattern with defined properties and then observing how the initial state evolves. The objective of the game is to show the development of a population, generation by generation. Each generation results from the preceding one, based on the simultaneous evaluation of all cells. The following basic rules govern what the next generation looks like: If a live cell has fewer than two live neighbors, it dies of loneliness. If a live cell has more than three live neighbors, it dies from overcrowding. If a dead cell has exactly three live neighbors, it comes to life. Any cell that does not meet any of those criteria is left as is for the next generation. A plain-Java sketch of these generation rules follows this list.
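The following is a minimal plain-Java sketch of those generation rules, independent of the example's DRL and ruleflow; the class name GameOfLifeSketch and the boolean-array grid representation are hypothetical and are only meant to make the birth and death criteria concrete. Generation rules sketch
public class GameOfLifeSketch {
    // Computes one generation of a boolean grid using the rules listed above.
    static boolean[][] nextGeneration(boolean[][] grid) {
        int rows = grid.length;
        int cols = grid[0].length;
        boolean[][] next = new boolean[rows][cols];

        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                int liveNeighbors = countLiveNeighbors(grid, r, c);
                if (grid[r][c]) {
                    // A live cell dies with fewer than two or more than three live neighbors;
                    // otherwise it is left as is (alive).
                    next[r][c] = liveNeighbors == 2 || liveNeighbors == 3;
                } else {
                    // A dead cell with exactly three live neighbors comes to life.
                    next[r][c] = liveNeighbors == 3;
                }
            }
        }
        return next;
    }

    // Counts the up-to-eight neighbors, including diagonals; border cells simply have fewer.
    static int countLiveNeighbors(boolean[][] grid, int row, int col) {
        int count = 0;
        for (int dr = -1; dr <= 1; dr++) {
            for (int dc = -1; dc <= 1; dc++) {
                if (dr == 0 && dc == 0) {
                    continue; // skip the cell itself
                }
                int r = row + dr;
                int c = col + dc;
                if (r >= 0 && r < grid.length && c >= 0 && c < grid[0].length && grid[r][c]) {
                    count++;
                }
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // A "blinker": three live cells in a row oscillate between horizontal and vertical.
        boolean[][] grid = new boolean[5][5];
        grid[2][1] = grid[2][2] = grid[2][3] = true;
        boolean[][] next = nextGeneration(grid);
        System.out.println("center column alive: " + next[1][2] + ", " + next[2][2] + ", " + next[3][2]);
    }
}
The example itself expresses the same criteria declaratively in the "Kill the ... " and "Give Birth" rules described below, and counts neighbors through Neighbor relations rather than array indexing.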
The Conway's Game of Life example uses Red Hat Decision Manager rules with ruleflow-group attributes to define the pattern implemented in the game. The example also contains a version of the decision set that achieves the same behavior using agenda groups. Agenda groups enable you to partition the decision engine agenda to provide execution control over groups of rules. By default, all rules are in the agenda group MAIN . You can use the agenda-group attribute to specify a different agenda group for the rule. This overview does not explore the version of the Conway example using agenda groups. For more information about agenda groups, see the Red Hat Decision Manager example decision sets that specifically address agenda groups. Conway example execution and interaction Similar to other Red Hat Decision Manager decision examples, you execute the Conway ruleflow example by running the org.drools.examples.conway.ConwayRuleFlowGroupRun class as a Java application in your IDE. When you execute the Conway example, the Conway's Game of Life GUI window appears. This window contains an empty grid, or "arena" where the life simulation takes place. Initially the grid is empty because no live cells are in the system yet. Figure 89.24. Conway example GUI after launch Select a predefined pattern from the Pattern drop-down menu and click Generation to click through each population generation. Each cell is either alive or dead, where live cells contain a green ball. As the population evolves from the initial pattern, cells live or die relative to neighboring cells, according to the rules of the game. Figure 89.25. Generation evolution in Conway example Neighbors include not only cells to the left, right, top, and bottom but also cells that are connected diagonally, so that each cell has a total of eight neighbors. Exceptions are the corner cells, which have only three neighbors, and the cells along the four borders, with five neighbors each. You can manually intervene to create or kill cells by clicking the cell. To run through an evolution automatically from the initial pattern, click Start . Conway example rules with ruleflow groups The rules in the ConwayRuleFlowGroupRun example use ruleflow groups to control rule execution. A ruleflow group is a group of rules associated by the ruleflow-group rule attribute. These rules can only fire when the group is activated. The group itself can only become active when the elaboration of the ruleflow diagram reaches the node representing the group. The Conway example uses the following ruleflow groups for rules: "register neighbor" "evaluate" "calculate" "reset calculate" "birth" "kill" "kill all" All of the Cell objects are inserted into the KIE session and the "register ... " rules in the ruleflow group "register neighbor" are allowed to execute by the ruleflow process. This group of four rules creates Neighbor relations between some cell and its northeastern, northern, northwestern, and western neighbors. This relation is bidirectional and handles the other four directions. Border cells do not require any special treatment. These cells are not paired with neighboring cells where there is not any. By the time all activations have fired for these rules, all cells are related to all their neighboring cells. Rules "register ... " After all the cells are inserted, some Java code applies the pattern to the grid, setting certain cells to Live . Then, when the user clicks Start or Generation , the example executes the Generation ruleflow. 
This ruleflow manages all changes of cells in each generation cycle. Figure 89.26. Generation ruleflow The ruleflow process enters the "evaluate" ruleflow group and any active rules in the group can fire. The rules "Kill the ... " and "Give Birth" in this group apply the game rules to birth or kill cells. The example uses the phase attribute to drive the reasoning of the Cell object by specific groups of rules. Typically, the phase is tied to a ruleflow group in the ruleflow process definition. Notice that the example does not change the state of any Cell objects at this point because it must complete the full evaluation before those changes can be applied. The example sets the cell to a phase that is either Phase.KILL or Phase.BIRTH , which is used later to control actions applied to the Cell object. Rules "Kill the ... " and "Give Birth" After all Cell objects in the grid have been evaluated, the example uses the "reset calculate" rule to clear any activations in the "calculate" ruleflow group. The example then enters a split in the ruleflow that enables the rules "kill" and "birth" to fire, if the ruleflow group is activated. These rules apply the state change. Rules "reset calculate", "kill", and "birth" At this stage, several Cell objects have been modified with the state changed to either LIVE or DEAD . When a cell becomes live or dead, the example uses the Neighbor relation in the rules "Calculate ... " to iterate over all surrounding cells, increasing or decreasing the liveNeighbor count. Any cell that has its count changed is also set to the EVALUATE phase to make sure it is included in the reasoning during the evaluation stage of the ruleflow process. After the live count has been determined and set for all cells, the ruleflow process ends. If the user initially clicked Start , the decision engine restarts the ruleflow at that point. If the user initially clicked Generation , the user can request another generation. Rules "Calculate ... " 89.10. House of Doom example decisions (backward chaining and recursion) The House of Doom example decision set demonstrates how the decision engine uses backward chaining and recursion to reach defined goals or subgoals in a hierarchical system. The following is an overview of the House of Doom example: Name : backwardchaining Main class : org.drools.examples.backwardchaining.HouseOfDoomMain (in src/main/java ) Module : drools-examples Type : Java application Rule file : org.drools.examples.backwardchaining.BC-Example.drl (in src/main/resources ) Objective : Demonstrates backward chaining and recursion A backward-chaining rule system is a goal-driven system that starts with a conclusion that the decision engine attempts to satisfy, often using recursion. If the system cannot reach the conclusion or goal, it searches for subgoals, which are conclusions that complete part of the current goal. The system continues this process until either the initial conclusion is satisfied or all subgoals are satisfied. In contrast, a forward-chaining rule system is a data-driven system that starts with a fact in the working memory of the decision engine and reacts to changes to that fact. When objects are inserted into working memory, any rule conditions that become true as a result of the change are scheduled for execution by the agenda. The decision engine in Red Hat Decision Manager uses both forward and backward chaining to evaluate rules. 
The following diagram illustrates how the decision engine evaluates rules using forward chaining overall with a backward-chaining segment in the logic flow: Figure 89.27. Rule evaluation logic using forward and backward chaining The House of Doom example uses rules with various types of queries to find the location of rooms and items within the house. The sample class Location.java contains the item and location elements used in the example. The sample class HouseOfDoomMain.java inserts the items or rooms in their respective locations in the house and executes the rules. Items and locations in HouseOfDoomMain.java class ksession.insert( new Location("Office", "House") ); ksession.insert( new Location("Kitchen", "House") ); ksession.insert( new Location("Knife", "Kitchen") ); ksession.insert( new Location("Cheese", "Kitchen") ); ksession.insert( new Location("Desk", "Office") ); ksession.insert( new Location("Chair", "Office") ); ksession.insert( new Location("Computer", "Desk") ); ksession.insert( new Location("Drawer", "Desk") ); The example rules rely on backward chaining and recursion to determine the location of all items and rooms in the house structure. The following diagram illustrates the structure of the House of Doom and the items and rooms within it: Figure 89.28. House of Doom structure To execute the example, run the org.drools.examples.backwardchaining.HouseOfDoomMain class as a Java application in your IDE. After the execution, the following output appears in the IDE console window: Execution output in the IDE console All rules in the example have fired to detect the location of all items in the house and to print the location of each in the output. Recursive query and related rules A recursive query repeatedly searches through the hierarchy of a data structure for relationships between elements. In the House of Doom example, the BC-Example.drl file contains an isContainedIn query that most of the rules in the example use to recursively evaluate the house data structure for data inserted into the decision engine: Recursive query in BC-Example.drl The rule "go" prints every string inserted into the system to determine how items are implemented, and the rule "go1" calls the query isContainedIn : Rules "go" and "go1" The example inserts the "go1" string into the decision engine and activates the "go1" rule to detect that item Office is in the location House : Insert string and fire rules Rule "go1" output in the IDE console Transitive closure rule Transitive closure is a relationship between an element contained in a parent element that is multiple levels higher in a hierarchical structure. The rule "go2" identifies the transitive closure relationship of the Drawer and the House : The Drawer is in the Desk in the Office in the House . The example inserts the "go2" string into the decision engine and activates the "go2" rule to detect that item Drawer is ultimately within the location House : Insert string and fire rules Rule "go2" output in the IDE console The decision engine determines this outcome based on the following logic: The query recursively searches through several levels in the house to detect the transitive closure between Drawer and House . Instead of using Location( x, y; ) , the query uses the value of (z, y; ) because Drawer is not directly in House . The z argument is currently unbound, which means it has no value and returns everything that is in the argument. The y argument is currently bound to House , so z returns Office and Kitchen . 
The query gathers information from the Office and checks recursively if the Drawer is in the Office . The query line isContainedIn( x, z; ) is called for these parameters. No instance of Drawer exists directly in Office , so no match is found. With z unbound, the query returns data within the Office and determines that z == Desk . The isContainedIn query recursively searches three times, and on the third time, the query detects an instance of Drawer in Desk . After this match on the first location, the query recursively searches back up the structure to determine that the Drawer is in the Desk , the Desk is in the Office , and the Office is in the House . Therefore, the Drawer is in the House and the rule is satisfied. Reactive query rule A reactive query searches through the hierarchy of a data structure for relationships between elements and is dynamically updated when elements in the structure are modified. The rule "go3" functions as a reactive query that detects if a new item Key ever becomes present in the Office by transitive closure: A Key in the Drawer in the Office . Rule "go3" The example inserts the "go3" string into the decision engine and activates the "go3" rule. Initially, this rule is not satisfied because no item Key exists in the house structure, so the rule produces no output. Insert string and fire rules Rule "go3" output in the IDE console (unsatisfied) The example then inserts a new item Key in the location Drawer , which is in Office . This change satisfies the transitive closure in the "go3" rule and the output is populated accordingly. Insert new item location and fire rules Rule "go3" output in the IDE console (satisfied) This change also adds another level in the structure that the query includes in subsequent recursive searches. Queries with unbound arguments in rules A query with one or more unbound arguments returns all undefined (unbound) items within a defined (bound) argument of the query. If all arguments in a query are unbound, then the query returns all items within the scope of the query. The rule "go4" uses an unbound argument thing to search for all items within the bound argument Office , instead of using a bound argument to search for a specific item in the Office : Rule "go4" The example inserts the "go4" string into the decision engine and activates the "go4" rule to return all items in the Office : Insert string and fire rules Rule "go4" output in the IDE console The rule "go5" uses both unbound arguments thing and location to search for all items and their locations in the entire House data structure: Rule "go5" The example inserts the "go5" string into the decision engine and activates the "go5" rule to return all items and their locations in the House data structure: Insert string and fire rules Rule "go5" output in the IDE console
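If you want to call these queries from application code rather than from the "go" rules, the KIE API exposes them through KieSession.getQueryResults(). The following is a minimal sketch, assuming the example's Location class and the isContainedIn query are on the classpath, that a default KIE session is configured in kmodule.xml, and that the query's first argument is bound to the identifier x as discussed above; the class name LocationQueryDemo is hypothetical. Query invocation sketch
package org.drools.examples.backwardchaining;

import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.rule.QueryResults;
import org.kie.api.runtime.rule.QueryResultsRow;
import org.kie.api.runtime.rule.Variable;

public class LocationQueryDemo {
    public static void main(String[] args) {
        KieServices ks = KieServices.Factory.get();
        KieContainer kc = ks.getKieClasspathContainer();
        // Assumes a default KIE session is configured for the backward-chaining rules.
        KieSession ksession = kc.newKieSession();

        // A small slice of the house structure used in the example.
        ksession.insert(new Location("Office", "House"));
        ksession.insert(new Location("Desk", "Office"));
        ksession.insert(new Location("Drawer", "Desk"));
        ksession.fireAllRules();

        // Bound call: is the Drawer (transitively) contained in the House?
        QueryResults contained = ksession.getQueryResults("isContainedIn", "Drawer", "House");
        System.out.println("Drawer in House: " + (contained.size() > 0));

        // Unbound first argument: list everything contained in the House.
        QueryResults everything = ksession.getQueryResults("isContainedIn", Variable.v, "House");
        for (QueryResultsRow row : everything) {
            System.out.println("in House: " + row.get("x"));
        }

        ksession.dispose();
    }
}
Passing Variable.v for an argument leaves it unbound, which mirrors the unbound-argument behavior that the "go4" and "go5" rules demonstrate.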
[ "KieServices ks = KieServices.Factory.get(); 1 KieContainer kc = ks.getKieClasspathContainer(); 2 KieSession ksession = kc.newKieSession(\"HelloWorldKS\"); 3", "// Set up listeners. ksession.addEventListener( new DebugAgendaEventListener() ); ksession.addEventListener( new DebugRuleRuntimeEventListener() ); // Set up a file-based audit logger. KieRuntimeLogger logger = KieServices.get().getLoggers().newFileLogger( ksession, \"./target/helloworld\" ); // Set up a ThreadedFileLogger so that the audit view reflects events while debugging. KieRuntimeLogger logger = ks.getLoggers().newThreadedFileLogger( ksession, \"./target/helloworld\", 1000 );", "// Insert facts into the KIE session. final Message message = new Message(); message.setMessage( \"Hello World\" ); message.setStatus( Message.HELLO ); ksession.insert( message ); // Fire the rules. ksession.fireAllRules();", "public static class Message { public static final int HELLO = 0; public static final int GOODBYE = 1; private String message; private int status; }", "rule \"Hello World\" when m : Message( status == Message.HELLO, message : message ) then System.out.println( message ); modify ( m ) { message = \"Goodbye cruel world\", status = Message.GOODBYE }; end", "rule \"Good Bye\" when Message( status == Message.GOODBYE, message : message ) then System.out.println( message ); end", "Hello World Goodbye cruel world", "==>[ActivationCreated(0): rule=Hello World; tuple=[fid:1:1:org.drools.examples.helloworld.HelloWorldExampleUSDMessage@17cec96]] [ObjectInserted: handle=[fid:1:1:org.drools.examples.helloworld.HelloWorldExampleUSDMessage@17cec96]; object=org.drools.examples.helloworld.HelloWorldExampleUSDMessage@17cec96] [BeforeActivationFired: rule=Hello World; tuple=[fid:1:1:org.drools.examples.helloworld.HelloWorldExampleUSDMessage@17cec96]] ==>[ActivationCreated(4): rule=Good Bye; tuple=[fid:1:2:org.drools.examples.helloworld.HelloWorldExampleUSDMessage@17cec96]] [ObjectUpdated: handle=[fid:1:2:org.drools.examples.helloworld.HelloWorldExampleUSDMessage@17cec96]; old_object=org.drools.examples.helloworld.HelloWorldExampleUSDMessage@17cec96; new_object=org.drools.examples.helloworld.HelloWorldExampleUSDMessage@17cec96] [AfterActivationFired(0): rule=Hello World] [BeforeActivationFired: rule=Good Bye; tuple=[fid:1:2:org.drools.examples.helloworld.HelloWorldExampleUSDMessage@17cec96]] [AfterActivationFired(4): rule=Good Bye]", "public class State { public static final int NOTRUN = 0; public static final int FINISHED = 1; private final PropertyChangeSupport changes = new PropertyChangeSupport( this ); private String name; private int state; ... setters and getters go here }", "final State a = new State( \"A\" ); final State b = new State( \"B\" ); final State c = new State( \"C\" ); final State d = new State( \"D\" ); ksession.insert( a ); ksession.insert( b ); ksession.insert( c ); ksession.insert( d ); ksession.fireAllRules(); // Dispose KIE session if stateful (not required if stateless). 
ksession.dispose();", "A finished B finished C finished D finished", "rule \"Bootstrap\" when a : State(name == \"A\", state == State.NOTRUN ) then System.out.println(a.getName() + \" finished\" ); a.setState( State.FINISHED ); end", "rule \"A to B\" when State(name == \"A\", state == State.FINISHED ) b : State(name == \"B\", state == State.NOTRUN ) then System.out.println(b.getName() + \" finished\" ); b.setState( State.FINISHED ); end", "rule \"B to C\" salience 10 when State(name == \"B\", state == State.FINISHED ) c : State(name == \"C\", state == State.NOTRUN ) then System.out.println(c.getName() + \" finished\" ); c.setState( State.FINISHED ); end rule \"B to D\" when State(name == \"B\", state == State.FINISHED ) d : State(name == \"D\", state == State.NOTRUN ) then System.out.println(d.getName() + \" finished\" ); d.setState( State.FINISHED ); end", "rule \"B to C\" agenda-group \"B to C\" auto-focus true when State(name == \"B\", state == State.FINISHED ) c : State(name == \"C\", state == State.NOTRUN ) then System.out.println(c.getName() + \" finished\" ); c.setState( State.FINISHED ); kcontext.getKnowledgeRuntime().getAgenda().getAgendaGroup( \"B to D\" ).setFocus(); end", "rule \"B to D\" agenda-group \"B to D\" when State(name == \"B\", state == State.FINISHED ) d : State(name == \"D\", state == State.NOTRUN ) then System.out.println(d.getName() + \" finished\" ); d.setState( State.FINISHED ); end", "A finished B finished C finished D finished", "declare type State @propertyChangeSupport end", "public void setState(final int newState) { int oldState = this.state; this.state = newState; this.changes.firePropertyChange( \"state\", oldState, newState ); }", "public static class Fibonacci { private int sequence; private long value; public Fibonacci( final int sequence ) { this.sequence = sequence; this.value = -1; } ... setters and getters go here }", "recurse for 50 recurse for 49 recurse for 48 recurse for 47 recurse for 5 recurse for 4 recurse for 3 recurse for 2 1 == 1 2 == 1 3 == 2 4 == 3 5 == 5 6 == 8 47 == 2971215073 48 == 4807526976 49 == 7778742049 50 == 12586269025", "ksession.insert( new Fibonacci( 50 ) ); ksession.fireAllRules();", "rule \"Recurse\" salience 10 when f : Fibonacci ( value == -1 ) not ( Fibonacci ( sequence == 1 ) ) then insert( new Fibonacci( f.sequence - 1 ) ); System.out.println( \"recurse for \" + f.sequence ); end", "rule \"Bootstrap\" when f : Fibonacci( sequence == 1 || == 2, value == -1 ) // multi-restriction then modify ( f ){ value = 1 }; System.out.println( f.sequence + \" == \" + f.value ); end", "rule \"Calculate\" when // Bind f1 and s1. f1 : Fibonacci( s1 : sequence, value != -1 ) // Bind f2 and v2, refer to bound variable s1. f2 : Fibonacci( sequence == (s1 + 1), v2 : value != -1 ) // Bind f3 and s3, alternative reference of f2.sequence. f3 : Fibonacci( s3 : sequence == (f2.sequence + 1 ), value == -1 ) then // Note the various referencing techniques. 
modify ( f3 ) { value = f1.value + v2 }; System.out.println( s3 + \" == \" + f3.value ); end", "Cheapest possible BASE PRICE IS: 120 DISCOUNT IS: 20", "template header age[] profile priorClaims policyType base reason package org.drools.examples.decisiontable; template \"Pricing bracket\" age policyType base rule \"Pricing bracket_@{row.rowNumber}\" when Driver(age >= @{age0}, age <= @{age1} , priorClaims == \"@{priorClaims}\" , locationRiskProfile == \"@{profile}\" ) policy: Policy(type == \"@{policyType}\") then policy.setBasePrice(@{base}); System.out.println(\"@{reason}\"); end end template", "template header age[] priorClaims policyType discount package org.drools.examples.decisiontable; template \"discounts\" age priorClaims policyType discount rule \"Discounts_@{row.rowNumber}\" when Driver(age >= @{age0}, age <= @{age1}, priorClaims == \"@{priorClaims}\") policy: Policy(type == \"@{policyType}\") then policy.applyDiscount(@{discount}); end end template", "<kbase name=\"DecisionTableKB\" packages=\"org.drools.examples.decisiontable\"> <ksession name=\"DecisionTableKS\" type=\"stateless\"/> </kbase> <kbase name=\"DTableWithTemplateKB\" packages=\"org.drools.examples.decisiontable-template\"> <ruleTemplate dtable=\"org/drools/examples/decisiontable-template/ExamplePolicyPricingTemplateData.xls\" template=\"org/drools/examples/decisiontable-template/BasePricing.drt\" row=\"3\" col=\"3\"/> <ruleTemplate dtable=\"org/drools/examples/decisiontable-template/ExamplePolicyPricingTemplateData.xls\" template=\"org/drools/examples/decisiontable-template/PromotionalPricing.drt\" row=\"18\" col=\"3\"/> <ksession name=\"DTableWithTemplateKS\"/> </kbase>", "DecisionTableConfiguration dtableconfiguration = KnowledgeBuilderFactory.newDecisionTableConfiguration(); dtableconfiguration.setInputType( DecisionTableInputType.XLS ); KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder(); Resource xlsRes = ResourceFactory.newClassPathResource( \"ExamplePolicyPricing.xls\", getClass() ); kbuilder.add( xlsRes, ResourceType.DTABLE, dtableconfiguration );", "// KieServices is the factory for all KIE services. KieServices ks = KieServices.Factory.get(); // Create a KIE container on the class path. KieContainer kc = ks.getKieClasspathContainer(); // Create the stock. Vector<Product> stock = new Vector<Product>(); stock.add( new Product( \"Gold Fish\", 5 ) ); stock.add( new Product( \"Fish Tank\", 25 ) ); stock.add( new Product( \"Fish Food\", 2 ) ); // A callback is responsible for populating the working memory and for firing all rules. PetStoreUI ui = new PetStoreUI( stock, new CheckoutCallback( kc ) ); ui.createAndShowGUI();", "public String checkout(JFrame frame, List<Product> items) { Order order = new Order(); // Iterate through list and add to cart. for ( Product p: items ) { order.addItem( new Purchase( order, p ) ); } // Add the JFrame to the ApplicationData to allow for user interaction. // From the KIE container, a KIE session is created based on // its definition and configuration in the META-INF/kmodule.xml file. KieSession ksession = kcontainer.newKieSession(\"PetStoreKS\"); ksession.setGlobal( \"frame\", frame ); ksession.setGlobal( \"textArea\", this.output ); ksession.insert( new Product( \"Gold Fish\", 5 ) ); ksession.insert( new Product( \"Fish Tank\", 25 ) ); ksession.insert( new Product( \"Fish Food\", 2 ) ); ksession.insert( new Product( \"Fish Food Sample\", 0 ) ); ksession.insert( order ); // Execute rules. 
ksession.fireAllRules(); // Return the state of the cart return order.toString(); }", "package org.drools.examples; import org.kie.api.runtime.KieRuntime; import org.drools.examples.petstore.PetStoreExample.Order; import org.drools.examples.petstore.PetStoreExample.Purchase; import org.drools.examples.petstore.PetStoreExample.Product; import java.util.ArrayList; import javax.swing.JOptionPane; import javax.swing.JFrame; global JFrame frame global javax.swing.JTextArea textArea", "function void doCheckout(JFrame frame, KieRuntime krt) { Object[] options = {\"Yes\", \"No\"}; int n = JOptionPane.showOptionDialog(frame, \"Would you like to checkout?\", \"\", JOptionPane.YES_NO_OPTION, JOptionPane.QUESTION_MESSAGE, null, options, options[0]); if (n == 0) { krt.getAgenda().getAgendaGroup( \"checkout\" ).setFocus(); } } function boolean requireTank(JFrame frame, KieRuntime krt, Order order, Product fishTank, int total) { Object[] options = {\"Yes\", \"No\"}; int n = JOptionPane.showOptionDialog(frame, \"Would you like to buy a tank for your \" + total + \" fish?\", \"Purchase Suggestion\", JOptionPane.YES_NO_OPTION, JOptionPane.QUESTION_MESSAGE, null, options, options[0]); System.out.print( \"SUGGESTION: Would you like to buy a tank for your \" + total + \" fish? - \" ); if (n == 0) { Purchase purchase = new Purchase( order, fishTank ); krt.insert( purchase ); order.addItem( purchase ); System.out.println( \"Yes\" ); } else { System.out.println( \"No\" ); } return true; }", "// Insert each item in the shopping cart into the working memory. rule \"Explode Cart\" agenda-group \"init\" auto-focus true salience 10 when USDorder : Order( grossTotal == -1 ) USDitem : Purchase() from USDorder.items then insert( USDitem ); kcontext.getKnowledgeRuntime().getAgenda().getAgendaGroup( \"show items\" ).setFocus(); kcontext.getKnowledgeRuntime().getAgenda().getAgendaGroup( \"evaluate\" ).setFocus(); end", "rule \"Show Items\" agenda-group \"show items\" when USDorder : Order() USDp : Purchase( order == USDorder ) then textArea.append( USDp.product + \"\\n\"); end", "// Free fish food sample when users buy a goldfish if they did not already buy // fish food and do not already have a fish food sample. rule \"Free Fish Food Sample\" agenda-group \"evaluate\" 1 when USDorder : Order() not ( USDp : Product( name == \"Fish Food\") && Purchase( product == USDp ) ) 2 not ( USDp : Product( name == \"Fish Food Sample\") && Purchase( product == USDp ) ) 3 exists ( USDp : Product( name == \"Gold Fish\") && Purchase( product == USDp ) ) 4 USDfishFoodSample : Product( name == \"Fish Food Sample\" ); then System.out.println( \"Adding free Fish Food Sample to cart\" ); purchase = new Purchase(USDorder, USDfishFoodSample); insert( purchase ); USDorder.addItem( purchase ); end", "// Suggest a fish tank if users buy more than five goldfish and // do not already have a tank. 
rule \"Suggest Tank\" agenda-group \"evaluate\" when USDorder : Order() not ( USDp : Product( name == \"Fish Tank\") && Purchase( product == USDp ) ) 1 ArrayList( USDtotal : size > 5 ) from collect( Purchase( product.name == \"Gold Fish\" ) ) 2 USDfishTank : Product( name == \"Fish Tank\" ) then requireTank(frame, kcontext.getKieRuntime(), USDorder, USDfishTank, USDtotal); end", "rule \"do checkout\" when then doCheckout(frame, kcontext.getKieRuntime()); end", "rule \"Gross Total\" agenda-group \"checkout\" when USDorder : Order( grossTotal == -1) Number( total : doubleValue ) from accumulate( Purchase( USDprice : product.price ), sum( USDprice ) ) then modify( USDorder ) { grossTotal = total } textArea.append( \"\\ngross total=\" + total + \"\\n\" ); end rule \"Apply 5% Discount\" agenda-group \"checkout\" when USDorder : Order( grossTotal >= 10 && < 20 ) then USDorder.discountedTotal = USDorder.grossTotal * 0.95; textArea.append( \"discountedTotal total=\" + USDorder.discountedTotal + \"\\n\" ); end rule \"Apply 10% Discount\" agenda-group \"checkout\" when USDorder : Order( grossTotal >= 20 ) then USDorder.discountedTotal = USDorder.grossTotal * 0.90; textArea.append( \"discountedTotal total=\" + USDorder.discountedTotal + \"\\n\" ); end", "Adding free Fish Food Sample to cart SUGGESTION: Would you like to buy a tank for your 6 fish? - Yes", "public class Politician { private String name; private boolean honest; }", "public class Hope { public Hope() { } }", "rule \"We have an honest Politician\" salience 10 when exists( Politician( honest == true ) ) then insertLogical( new Hope() ); end", "rule \"Hope Lives\" salience 10 when exists( Hope() ) then System.out.println(\"Hurrah!!! Democracy Lives\"); end", "rule \"Corrupt the Honest\" when politician : Politician( honest == true ) exists( Hope() ) then System.out.println( \"I'm an evil corporation and I have corrupted \" + politician.getName() ); modify ( politician ) { honest = false }; end", "rule \"Hope is Dead\" when not( Hope() ) then System.out.println( \"We are all Doomed!!! Democracy is Dead\" ); end", "public static void execute( KieContainer kc ) { KieSession ksession = kc.newKieSession(\"HonestPoliticianKS\"); final Politician p1 = new Politician( \"President of Umpa Lumpa\", true ); final Politician p2 = new Politician( \"Prime Minster of Cheeseland\", true ); final Politician p3 = new Politician( \"Tsar of Pringapopaloo\", true ); final Politician p4 = new Politician( \"Omnipotence Om\", true ); ksession.insert( p1 ); ksession.insert( p2 ); ksession.insert( p3 ); ksession.insert( p4 ); ksession.fireAllRules(); ksession.dispose(); }", "Hurrah!!! Democracy Lives I'm an evil corporation and I have corrupted President of Umpa Lumpa I'm an evil corporation and I have corrupted Prime Minster of Cheeseland I'm an evil corporation and I have corrupted Tsar of Pringapopaloo I'm an evil corporation and I have corrupted Omnipotence Om We are all Doomed!!! 
Democracy is Dead", "package org.drools.examples.honestpolitician; import org.kie.api.KieServices; import org.kie.api.event.rule.DebugAgendaEventListener; 1 import org.kie.api.event.rule.DebugRuleRuntimeEventListener; import org.kie.api.runtime.KieContainer; import org.kie.api.runtime.KieSession; public class HonestPoliticianExample { /** * @param args */ public static void main(final String[] args) { KieServices ks = KieServices.Factory.get(); 2 //ks = KieServices.Factory.get(); KieContainer kc = KieServices.Factory.get().getKieClasspathContainer(); System.out.println(kc.verify().getMessages().toString()); //execute( kc ); execute( ks, kc); 3 } public static void execute( KieServices ks, KieContainer kc ) { 4 KieSession ksession = kc.newKieSession(\"HonestPoliticianKS\"); final Politician p1 = new Politician( \"President of Umpa Lumpa\", true ); final Politician p2 = new Politician( \"Prime Minster of Cheeseland\", true ); final Politician p3 = new Politician( \"Tsar of Pringapopaloo\", true ); final Politician p4 = new Politician( \"Omnipotence Om\", true ); ksession.insert( p1 ); ksession.insert( p2 ); ksession.insert( p3 ); ksession.insert( p4 ); // The application can also setup listeners 5 ksession.addEventListener( new DebugAgendaEventListener() ); ksession.addEventListener( new DebugRuleRuntimeEventListener() ); // Set up a file-based audit logger. ks.getLoggers().newFileLogger( ksession, \"./target/honestpolitician\" ); 6 ksession.fireAllRules(); ksession.dispose(); } }", "single 8 at [0,1] column elimination due to [1,2]: remove 9 from [4,2] hidden single 9 at [1,2] row elimination due to [2,8]: remove 7 from [2,4] remove 6 from [3,8] due to naked pair at [3,2] and [3,7] hidden pair in row at [4,6] and [4,4]", "Col: 0 Col: 1 Col: 2 Col: 3 Col: 4 Col: 5 Col: 6 Col: 7 Col: 8 Row 0: 123456789 --- 5 --- --- 6 --- --- 8 --- 123456789 --- 1 --- --- 9 --- --- 4 --- 123456789 Row 1: --- 9 --- 123456789 123456789 --- 6 --- 123456789 --- 5 --- 123456789 123456789 --- 3 --- Row 2: --- 7 --- 123456789 123456789 --- 4 --- --- 9 --- --- 3 --- 123456789 123456789 --- 8 --- Row 3: --- 8 --- --- 9 --- --- 7 --- 123456789 --- 4 --- 123456789 --- 6 --- --- 3 --- --- 5 --- Row 4: 123456789 123456789 --- 3 --- --- 9 --- 123456789 --- 6 --- --- 8 --- 123456789 123456789 Row 5: --- 4 --- --- 6 --- --- 5 --- 123456789 --- 8 --- 123456789 --- 2 --- --- 9 --- --- 1 --- Row 6: --- 5 --- 123456789 123456789 --- 2 --- --- 6 --- --- 9 --- 123456789 123456789 --- 7 --- Row 7: --- 6 --- 123456789 123456789 --- 5 --- 123456789 --- 4 --- 123456789 123456789 --- 9 --- Row 8: 123456789 --- 4 --- --- 9 --- --- 7 --- 123456789 --- 8 --- --- 3 --- --- 5 --- 123456789", "cell [0,8]: 5 has a duplicate in row 0 cell [0,0]: 5 has a duplicate in row 0 cell [6,0]: 8 has a duplicate in col 0 cell [4,0]: 8 has a duplicate in col 0 Validation complete.", "Validation complete. 
Sorry - can't solve this grid.", "rule \"duplicate in cell row\" when USDc: Cell( USDv: value != null ) USDcr: CellRow( cells contains USDc ) exists Cell( this != USDc, value == USDv, cellRow == USDcr ) then System.out.println( \"cell \" + USDc.toString() + \" has a duplicate in row \" + USDcr.getNumber() ); end rule \"duplicate in cell col\" when USDc: Cell( USDv: value != null ) USDcc: CellCol( cells contains USDc ) exists Cell( this != USDc, value == USDv, cellCol == USDcc ) then System.out.println( \"cell \" + USDc.toString() + \" has a duplicate in col \" + USDcc.getNumber() ); end rule \"duplicate in cell sqr\" when USDc: Cell( USDv: value != null ) USDcs: CellSqr( cells contains USDc ) exists Cell( this != USDc, value == USDv, cellSqr == USDcs ) then System.out.println( \"cell \" + USDc.toString() + \" has duplicate in its square of nine.\" ); end", "rule \"terminate group\" salience -100 when then System.out.println( \"Validation complete.\" ); drools.halt(); end", "// A Setting object is inserted to define the value of a Cell. // Rule for updating the cell and all cell groups that contain it rule \"set a value\" when // A Setting with row and column number, and a value USDs: Setting( USDrn: rowNo, USDcn: colNo, USDv: value ) // A matching Cell, with no value set USDc: Cell( rowNo == USDrn, colNo == USDcn, value == null, USDcr: cellRow, USDcc: cellCol, USDcs: cellSqr ) // Count down USDctr: Counter( USDcount: count ) then // Modify the Cell by setting its value. modify( USDc ){ setValue( USDv ) } // System.out.println( \"set cell \" + USDc.toString() ); modify( USDcr ){ blockValue( USDv ) } modify( USDcc ){ blockValue( USDv ) } modify( USDcs ){ blockValue( USDv ) } modify( USDctr ){ setCount( USDcount - 1 ) } end // Rule for removing a value from all cells that are siblings // in one of the three cell groups rule \"eliminate a value from Cell\" when // A Setting with row and column number, and a value USDs: Setting( USDrn: rowNo, USDcn: colNo, USDv: value ) // The matching Cell, with the value already set Cell( rowNo == USDrn, colNo == USDcn, value == USDv, USDexCells: exCells ) // For all Cells that are associated with the updated cell USDc: Cell( free contains USDv ) from USDexCells then // System.out.println( \"clear \" + USDv + \" from cell \" + USDc.posAsString() ); // Modify a related Cell by blocking the assigned value. modify( USDc ){ blockValue( USDv ) } end // Rule for eliminating the Setting fact rule \"retract setting\" when // A Setting with row and column number, and a value USDs: Setting( USDrn: rowNo, USDcn: colNo, USDv: value ) // The matching Cell, with the value already set USDc: Cell( rowNo == USDrn, colNo == USDcn, value == USDv ) // This is the negation of the last pattern in the previous rule. // Now the Setting fact can be safely retracted. not( USDx: Cell( free contains USDv ) and Cell( this == USDc, exCells contains USDx ) ) then // System.out.println( \"done setting cell \" + USDc.toString() ); // Discard the Setter fact. delete( USDs ); // Sudoku.sudoku.consistencyCheck(); end", "// Detect a set of candidate values with cardinality 1 for some Cell. // This is the value to be set. rule \"single\" when // Currently no setting underway not Setting() // One element in the \"free\" set USDc: Cell( USDrn: rowNo, USDcn: colNo, freeCount == 1 ) then Integer i = USDc.getFreeValue(); if (explain) System.out.println( \"single \" + i + \" at \" + USDc.posAsString() ); // Insert another Setter fact. 
insert( new Setting( USDrn, USDcn, i ) ); end // Detect a set of candidate values with a value that is the only one // in one of its groups. This is the value to be set. rule \"hidden single\" when // Currently no setting underway not Setting() not Cell( freeCount == 1 ) // Some integer USDi: Integer() // The \"free\" set contains this number USDc: Cell( USDrn: rowNo, USDcn: colNo, freeCount > 1, free contains USDi ) // A cell group contains this cell USDc. USDcg: CellGroup( cells contains USDc ) // No other cell from that group contains USDi. not ( Cell( this != USDc, free contains USDi ) from USDcg.getCells() ) then if (explain) System.out.println( \"hidden single \" + USDi + \" at \" + USDc.posAsString() ); // Insert another Setter fact. insert( new Setting( USDrn, USDcn, USDi ) ); end", "// A \"naked pair\" is two cells in some cell group with their sets of // permissible values being equal with cardinality 2. These two values // can be removed from all other candidate lists in the group. rule \"naked pair\" when // Currently no setting underway not Setting() not Cell( freeCount == 1 ) // One cell with two candidates USDc1: Cell( freeCount == 2, USDf1: free, USDr1: cellRow, USDrn1: rowNo, USDcn1: colNo, USDb1: cellSqr ) // The containing cell group USDcg: CellGroup( freeCount > 2, cells contains USDc1 ) // Another cell with two candidates, not the one we already have USDc2: Cell( this != USDc1, free == USDf1 /*** , rowNo >= USDrn1, colNo >= USDcn1 ***/ ) from USDcg.cells // Get one of the \"naked pair\". Integer( USDv: intValue ) from USDc1.getFree() // Get some other cell with a candidate equal to one from the pair. USDc3: Cell( this != USDc1 && != USDc2, freeCount > 1, free contains USDv ) from USDcg.cells then if (explain) System.out.println( \"remove \" + USDv + \" from \" + USDc3.posAsString() + \" due to naked pair at \" + USDc1.posAsString() + \" and \" + USDc2.posAsString() ); // Remove the value. modify( USDc3 ){ blockValue( USDv ) } end", "// If two cells within the same cell group contain candidate sets with more than // two values, with two values being in both of them but in none of the other // cells, then we have a \"hidden pair\". We can remove all other candidates from // these two cells. rule \"hidden pair in row\" when // Currently no setting underway not Setting() not Cell( freeCount == 1 ) // Establish a pair of Integer facts. USDi1: Integer() USDi2: Integer( this > USDi1 ) // Look for a Cell with these two among its candidates. (The upper bound on // the number of candidates avoids a lot of useless work during startup.) USDc1: Cell( USDrn1: rowNo, USDcn1: colNo, freeCount > 2 && < 9, free contains USDi1 && contains USDi2, USDcellRow: cellRow ) // Get another one from the same row, with the same pair among its candidates. USDc2: Cell( this != USDc1, cellRow == USDcellRow, freeCount > 2, free contains USDi1 && contains USDi2 ) // Ascertain that no other cell in the group has one of these two values. not( Cell( this != USDc1 && != USDc2, free contains USDi1 || contains USDi2 ) from USDcellRow.getCells() ) then if( explain) System.out.println( \"hidden pair in row at \" + USDc1.posAsString() + \" and \" + USDc2.posAsString() ); // Set the candidate lists of these two Cells to the \"hidden pair\". 
modify( USDc1 ){ blockExcept( USDi1, USDi2 ) } modify( USDc2 ){ blockExcept( USDi1, USDi2 ) } end rule \"hidden pair in column\" when not Setting() not Cell( freeCount == 1 ) USDi1: Integer() USDi2: Integer( this > USDi1 ) USDc1: Cell( USDrn1: rowNo, USDcn1: colNo, freeCount > 2 && < 9, free contains USDi1 && contains USDi2, USDcellCol: cellCol ) USDc2: Cell( this != USDc1, cellCol == USDcellCol, freeCount > 2, free contains USDi1 && contains USDi2 ) not( Cell( this != USDc1 && != USDc2, free contains USDi1 || contains USDi2 ) from USDcellCol.getCells() ) then if (explain) System.out.println( \"hidden pair in column at \" + USDc1.posAsString() + \" and \" + USDc2.posAsString() ); modify( USDc1 ){ blockExcept( USDi1, USDi2 ) } modify( USDc2 ){ blockExcept( USDi1, USDi2 ) } end rule \"hidden pair in square\" when not Setting() not Cell( freeCount == 1 ) USDi1: Integer() USDi2: Integer( this > USDi1 ) USDc1: Cell( USDrn1: rowNo, USDcn1: colNo, freeCount > 2 && < 9, free contains USDi1 && contains USDi2, USDcellSqr: cellSqr ) USDc2: Cell( this != USDc1, cellSqr == USDcellSqr, freeCount > 2, free contains USDi1 && contains USDi2 ) not( Cell( this != USDc1 && != USDc2, free contains USDi1 || contains USDi2 ) from USDcellSqr.getCells() ) then if (explain) System.out.println( \"hidden pair in square \" + USDc1.posAsString() + \" and \" + USDc2.posAsString() ); modify( USDc1 ){ blockExcept( USDi1, USDi2 ) } modify( USDc2 ){ blockExcept( USDi1, USDi2 ) } end", "rule \"X-wings in rows\" when not Setting() not Cell( freeCount == 1 ) USDi: Integer() USDca1: Cell( freeCount > 1, free contains USDi, USDra: cellRow, USDrano: rowNo, USDc1: cellCol, USDc1no: colNo ) USDcb1: Cell( freeCount > 1, free contains USDi, USDrb: cellRow, USDrbno: rowNo > USDrano, cellCol == USDc1 ) not( Cell( this != USDca1 && != USDcb1, free contains USDi ) from USDc1.getCells() ) USDca2: Cell( freeCount > 1, free contains USDi, cellRow == USDra, USDc2: cellCol, USDc2no: colNo > USDc1no ) USDcb2: Cell( freeCount > 1, free contains USDi, cellRow == USDrb, cellCol == USDc2 ) not( Cell( this != USDca2 && != USDcb2, free contains USDi ) from USDc2.getCells() ) USDcx: Cell( rowNo == USDrano || == USDrbno, colNo != USDc1no && != USDc2no, freeCount > 1, free contains USDi ) then if (explain) { System.out.println( \"X-wing with \" + USDi + \" in rows \" + USDca1.posAsString() + \" - \" + USDcb1.posAsString() + USDca2.posAsString() + \" - \" + USDcb2.posAsString() + \", remove from \" + USDcx.posAsString() ); } modify( USDcx ){ blockValue( USDi ) } end rule \"X-wings in columns\" when not Setting() not Cell( freeCount == 1 ) USDi: Integer() USDca1: Cell( freeCount > 1, free contains USDi, USDc1: cellCol, USDc1no: colNo, USDra: cellRow, USDrano: rowNo ) USDca2: Cell( freeCount > 1, free contains USDi, USDc2: cellCol, USDc2no: colNo > USDc1no, cellRow == USDra ) not( Cell( this != USDca1 && != USDca2, free contains USDi ) from USDra.getCells() ) USDcb1: Cell( freeCount > 1, free contains USDi, cellCol == USDc1, USDrb: cellRow, USDrbno: rowNo > USDrano ) USDcb2: Cell( freeCount > 1, free contains USDi, cellCol == USDc2, cellRow == USDrb ) not( Cell( this != USDcb1 && != USDcb2, free contains USDi ) from USDrb.getCells() ) USDcx: Cell( colNo == USDc1no || == USDc2no, rowNo != USDrano && != USDrbno, freeCount > 1, free contains USDi ) then if (explain) { System.out.println( \"X-wing with \" + USDi + \" in columns \" + USDca1.posAsString() + \" - \" + USDca2.posAsString() + USDcb1.posAsString() + \" - \" + USDcb2.posAsString() + \", remove from 
\" + USDcx.posAsString() ); } modify( USDcx ){ blockValue( USDi ) } end", "rule \"intersection removal column\" when not Setting() not Cell( freeCount == 1 ) USDi: Integer() // Occurs in a Cell USDc: Cell( free contains USDi, USDcs: cellSqr, USDcc: cellCol ) // Does not occur in another cell of the same square and a different column not Cell( this != USDc, free contains USDi, cellSqr == USDcs, cellCol != USDcc ) // A cell exists in the same column and another square containing this value. USDcx: Cell( freeCount > 1, free contains USDi, cellCol == USDcc, cellSqr != USDcs ) then // Remove the value from that other cell. if (explain) { System.out.println( \"column elimination due to \" + USDc.posAsString() + \": remove \" + USDi + \" from \" + USDcx.posAsString() ); } modify( USDcx ){ blockValue( USDi ) } end rule \"intersection removal row\" when not Setting() not Cell( freeCount == 1 ) USDi: Integer() // Occurs in a Cell USDc: Cell( free contains USDi, USDcs: cellSqr, USDcr: cellRow ) // Does not occur in another cell of the same square and a different row. not Cell( this != USDc, free contains USDi, cellSqr == USDcs, cellRow != USDcr ) // A cell exists in the same row and another square containing this value. USDcx: Cell( freeCount > 1, free contains USDi, cellRow == USDcr, cellSqr != USDcs ) then // Remove the value from that other cell. if (explain) { System.out.println( \"row elimination due to \" + USDc.posAsString() + \": remove \" + USDi + \" from \" + USDcx.posAsString() ); } modify( USDcx ){ blockValue( USDi ) } end", "rule \"register north east\" ruleflow-group \"register neighbor\" when USDcell: Cell( USDrow : row, USDcol : col ) USDnorthEast : Cell( row == (USDrow - 1), col == ( USDcol + 1 ) ) then insert( new Neighbor( USDcell, USDnorthEast ) ); insert( new Neighbor( USDnorthEast, USDcell ) ); end rule \"register north\" ruleflow-group \"register neighbor\" when USDcell: Cell( USDrow : row, USDcol : col ) USDnorth : Cell( row == (USDrow - 1), col == USDcol ) then insert( new Neighbor( USDcell, USDnorth ) ); insert( new Neighbor( USDnorth, USDcell ) ); end rule \"register north west\" ruleflow-group \"register neighbor\" when USDcell: Cell( USDrow : row, USDcol : col ) USDnorthWest : Cell( row == (USDrow - 1), col == ( USDcol - 1 ) ) then insert( new Neighbor( USDcell, USDnorthWest ) ); insert( new Neighbor( USDnorthWest, USDcell ) ); end rule \"register west\" ruleflow-group \"register neighbor\" when USDcell: Cell( USDrow : row, USDcol : col ) USDwest : Cell( row == USDrow, col == ( USDcol - 1 ) ) then insert( new Neighbor( USDcell, USDwest ) ); insert( new Neighbor( USDwest, USDcell ) ); end", "rule \"Kill The Lonely\" ruleflow-group \"evaluate\" no-loop when // A live cell has fewer than 2 live neighbors. theCell: Cell( liveNeighbors < 2, cellState == CellState.LIVE, phase == Phase.EVALUATE ) then modify( theCell ){ setPhase( Phase.KILL ); } end rule \"Kill The Overcrowded\" ruleflow-group \"evaluate\" no-loop when // A live cell has more than 3 live neighbors. theCell: Cell( liveNeighbors > 3, cellState == CellState.LIVE, phase == Phase.EVALUATE ) then modify( theCell ){ setPhase( Phase.KILL ); } end rule \"Give Birth\" ruleflow-group \"evaluate\" no-loop when // A dead cell has 3 live neighbors. 
theCell: Cell( liveNeighbors == 3, cellState == CellState.DEAD, phase == Phase.EVALUATE ) then modify( theCell ){ theCell.setPhase( Phase.BIRTH ); } end", "rule \"reset calculate\" ruleflow-group \"reset calculate\" when then WorkingMemory wm = drools.getWorkingMemory(); wm.clearRuleFlowGroup( \"calculate\" ); end rule \"kill\" ruleflow-group \"kill\" no-loop when theCell: Cell( phase == Phase.KILL ) then modify( theCell ){ setCellState( CellState.DEAD ), setPhase( Phase.DONE ); } end rule \"birth\" ruleflow-group \"birth\" no-loop when theCell: Cell( phase == Phase.BIRTH ) then modify( theCell ){ setCellState( CellState.LIVE ), setPhase( Phase.DONE ); } end", "rule \"Calculate Live\" ruleflow-group \"calculate\" lock-on-active when theCell: Cell( cellState == CellState.LIVE ) Neighbor( cell == theCell, USDneighbor : neighbor ) then modify( USDneighbor ){ setLiveNeighbors( USDneighbor.getLiveNeighbors() + 1 ), setPhase( Phase.EVALUATE ); } end rule \"Calculate Dead\" ruleflow-group \"calculate\" lock-on-active when theCell: Cell( cellState == CellState.DEAD ) Neighbor( cell == theCell, USDneighbor : neighbor ) then modify( USDneighbor ){ setLiveNeighbors( USDneighbor.getLiveNeighbors() - 1 ), setPhase( Phase.EVALUATE ); } end", "ksession.insert( new Location(\"Office\", \"House\") ); ksession.insert( new Location(\"Kitchen\", \"House\") ); ksession.insert( new Location(\"Knife\", \"Kitchen\") ); ksession.insert( new Location(\"Cheese\", \"Kitchen\") ); ksession.insert( new Location(\"Desk\", \"Office\") ); ksession.insert( new Location(\"Chair\", \"Office\") ); ksession.insert( new Location(\"Computer\", \"Desk\") ); ksession.insert( new Location(\"Drawer\", \"Desk\") );", "go1 Office is in the House --- go2 Drawer is in the House --- go3 --- Key is in the Office --- go4 Chair is in the Office Desk is in the Office Key is in the Office Computer is in the Office Drawer is in the Office --- go5 Chair is in Office Desk is in Office Drawer is in Desk Key is in Drawer Kitchen is in House Cheese is in Kitchen Knife is in Kitchen Computer is in Desk Office is in House Key is in Office Drawer is in House Computer is in House Key is in House Desk is in House Chair is in House Knife is in House Cheese is in House Computer is in Office Drawer is in Office Key is in Desk", "query isContainedIn( String x, String y ) Location( x, y; ) or ( Location( z, y; ) and isContainedIn( x, z; ) ) end", "rule \"go\" salience 10 when USDs : String() then System.out.println( USDs ); end rule \"go1\" when String( this == \"go1\" ) isContainedIn(\"Office\", \"House\"; ) then System.out.println( \"Office is in the House\" ); end", "ksession.insert( \"go1\" ); ksession.fireAllRules();", "go1 Office is in the House", "rule \"go2\" when String( this == \"go2\" ) isContainedIn(\"Drawer\", \"House\"; ) then System.out.println( \"Drawer is in the House\" ); end", "ksession.insert( \"go2\" ); ksession.fireAllRules();", "go2 Drawer is in the House", "isContainedIn(x==drawer, z==desk)", "Location(x==drawer, y==desk)", "rule \"go3\" when String( this == \"go3\" ) isContainedIn(\"Key\", \"Office\"; ) then System.out.println( \"Key is in the Office\" ); end", "ksession.insert( \"go3\" ); ksession.fireAllRules();", "go3", "ksession.insert( new Location(\"Key\", \"Drawer\") ); ksession.fireAllRules();", "Key is in the Office", "rule \"go4\" when String( this == \"go4\" ) isContainedIn(thing, \"Office\"; ) then System.out.println( thing + \"is in the Office\" ); end", "ksession.insert( \"go4\" ); ksession.fireAllRules();", "go4 Chair 
is in the Office Desk is in the Office Key is in the Office Computer is in the Office Drawer is in the Office", "rule \"go5\" when String( this == \"go5\" ) isContainedIn(thing, location; ) then System.out.println(thing + \" is in \" + location ); end", "ksession.insert( \"go5\" ); ksession.fireAllRules();", "go5 Chair is in Office Desk is in Office Drawer is in Desk Key is in Drawer Kitchen is in House Cheese is in Kitchen Knife is in Kitchen Computer is in Desk Office is in House Key is in Office Drawer is in House Computer is in House Key is in House Desk is in House Chair is in House Knife is in House Cheese is in House Computer is in Office Drawer is in Office Key is in Desk" ]
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/decision-examples-ide-con_decision-engine
Chapter 12. Multiple branches in Business Central
Chapter 12. Multiple branches in Business Central Multiple branches support in Business Central provides the ability to create a new branch based on an existing one, including all of its assets. All new, imported, and sample projects open in the default master branch. You can create as many branches as you need and can work on multiple branches interchangeably without impacting the original project on the master branch. Red Hat Process Automation Manager 7.13 includes support for persisting branches, which means that Business Central remembers the last branch used and will open in that branch when you log back in. 12.1. Creating branches You can create new branches in Business Central and name them whatever you like. Initially, you will only have the default master branch. When you create a new branch for a project, you are making a copy of the selected branch. You can make changes to the project on the new branch without impacting the original master branch version. Procedure In Business Central, go to Menu Design Projects . Click the project for which you want to create the new branch, for example the Mortgage_Process sample project. Click master Add Branch . Figure 12.1. Create the new branch menu Type testBranch1 in the Name field and select master from the Add Branch window, where testBranch1 is any name that you want to give the new branch. Select the branch that will be the base for the new branch from the Add Branch window. This can be any existing branch. Click Add . Figure 12.2. Add the new branch window After adding the new branch, you are redirected to it, and it contains all of the assets that you had in your project in the master branch. 12.2. Selecting branches You can switch between branches to make modifications to project assets and test the revised functionality. Procedure Click the current branch name and select the desired project branch from the drop-down list. Figure 12.3. Select a branch menu After selecting the branch, you are redirected to that branch containing the project and all of the assets that you had defined. 12.3. Deleting branches You can delete any branch except for the master branch. Business Central does not allow you to delete the master branch, to avoid corrupting your environment. You must be in any branch other than master for the following procedure to work. Procedure Click in the upper-right corner of the screen and select Delete Branch . Figure 12.4. Delete a branch In the Delete Branch window, enter the name of the branch you want to delete. Click Delete Branch . The branch is deleted and the project branch switches to the master branch. 12.4. Building and deploying projects After your project is developed, you can build the project from the specified branch in Business Central and deploy it to the configured KIE Server. Procedure In Business Central, go to Menu Design Projects and click the project name. In the upper-right corner, click Deploy to build the project and deploy it to KIE Server. Note You can also select the Build & Install option to build the project and publish the KJAR file to the configured Maven repository without deploying to a KIE Server. In a development environment, you can click Deploy to deploy the built KJAR file to a KIE Server without stopping any running instances (if applicable), or click Redeploy to deploy the built KJAR file and replace all instances. The next time you deploy or redeploy the built KJAR, the deployment unit (KIE container) is automatically updated in the same target KIE Server.
In a production environment, the Redeploy option is disabled and you can click only Deploy to deploy the built KJAR file to a new deployment unit (KIE container) on a KIE Server. To configure the KIE Server environment mode, set the org.kie.server.mode system property to org.kie.server.mode=development or org.kie.server.mode=production . To configure the deployment behavior for a corresponding project in Business Central, go to project Settings General Settings Version and toggle the Development Mode option. By default, KIE Server and all new projects in Business Central are in development mode. You cannot deploy a project with Development Mode turned on or with a manually added SNAPSHOT version suffix to a KIE Server that is in production mode. If the build fails, address any problems described in the Alerts panel at the bottom of the screen. To review project deployment details, click View deployment details in the deployment banner at the top of the screen or in the Deploy drop-down menu. This option directs you to the Menu Deploy Execution Servers page. For more information about project deployment options, see Packaging and deploying a Red Hat Process Automation Manager project .
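A minimal sketch of passing the environment mode to KIE Server at startup. The org.kie.server.mode property name comes from this section; the Red Hat JBoss EAP installation path and server profile shown here are assumptions for illustration only and may differ in your deployment:
# Assumed EAP startup invocation; adjust EAP_HOME and the profile for your installation
EAP_HOME/bin/standalone.sh -c standalone-full.xml -Dorg.kie.server.mode=production
# Development mode (the default) is selected the same way
EAP_HOME/bin/standalone.sh -c standalone-full.xml -Dorg.kie.server.mode=development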
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/deploying_and_managing_red_hat_process_automation_manager_services/multiple-branches-con
Chapter 7. Available BPF Features
Chapter 7. Available BPF Features This chapter provides the complete list of Berkeley Packet Filter ( BPF ) features available in the kernel of this minor version of Red Hat Enterprise Linux 8. The tables include the lists of: System configuration and other options Available program types and supported helpers Available map types This chapter contains automatically generated output of the bpftool feature command. Table 7.1. System configuration and other options Option Value unprivileged_bpf_disabled 1 (bpf() syscall restricted to privileged users, without recovery) JIT compiler 1 (enabled) JIT compiler hardening 1 (enabled for unprivileged users) JIT compiler kallsyms exports 1 (enabled for root) Memory limit for JIT for unprivileged users 264241152 CONFIG_BPF y CONFIG_BPF_SYSCALL y CONFIG_HAVE_EBPF_JIT y CONFIG_BPF_JIT y CONFIG_BPF_JIT_ALWAYS_ON y CONFIG_DEBUG_INFO_BTF y CONFIG_DEBUG_INFO_BTF_MODULES n CONFIG_CGROUPS y CONFIG_CGROUP_BPF y CONFIG_CGROUP_NET_CLASSID y CONFIG_SOCK_CGROUP_DATA y CONFIG_BPF_EVENTS y CONFIG_KPROBE_EVENTS y CONFIG_UPROBE_EVENTS y CONFIG_TRACING y CONFIG_FTRACE_SYSCALLS y CONFIG_FUNCTION_ERROR_INJECTION y CONFIG_BPF_KPROBE_OVERRIDE y CONFIG_NET y CONFIG_XDP_SOCKETS y CONFIG_LWTUNNEL_BPF y CONFIG_NET_ACT_BPF m CONFIG_NET_CLS_BPF m CONFIG_NET_CLS_ACT y CONFIG_NET_SCH_INGRESS m CONFIG_XFRM y CONFIG_IP_ROUTE_CLASSID y CONFIG_IPV6_SEG6_BPF n CONFIG_BPF_LIRC_MODE2 n CONFIG_BPF_STREAM_PARSER y CONFIG_NETFILTER_XT_MATCH_BPF m CONFIG_BPFILTER n CONFIG_BPFILTER_UMH n CONFIG_TEST_BPF m CONFIG_HZ 1000 bpf() syscall available Large program size limit available Table 7.2. Available program types and supported helpers Program type Available helpers socket_filter bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_perf_event_output, bpf_skb_load_bytes, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_socket_cookie, bpf_get_socket_uid, bpf_skb_load_bytes_relative, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf kprobe bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_probe_read, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_perf_event_read, bpf_perf_event_output, bpf_get_stackid, bpf_get_current_task, bpf_current_task_under_cgroup, bpf_get_numa_node_id, bpf_probe_read_str, bpf_perf_event_read_value, bpf_override_return, bpf_get_stack, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_send_signal, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_send_signal_thread, bpf_jiffies64, bpf_get_ns_current_pid_tgid, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_get_task_stack, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, 
bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf sched_cls bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_skb_store_bytes, bpf_l3_csum_replace, bpf_l4_csum_replace, bpf_tail_call, bpf_clone_redirect, bpf_get_cgroup_classid, bpf_skb_vlan_push, bpf_skb_vlan_pop, bpf_skb_get_tunnel_key, bpf_skb_set_tunnel_key, bpf_redirect, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_csum_diff, bpf_skb_get_tunnel_opt, bpf_skb_set_tunnel_opt, bpf_skb_change_proto, bpf_skb_change_type, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_skb_change_tail, bpf_skb_pull_data, bpf_csum_update, bpf_set_hash_invalid, bpf_get_numa_node_id, bpf_skb_change_head, bpf_get_socket_cookie, bpf_get_socket_uid, bpf_set_hash, bpf_skb_adjust_room, bpf_skb_get_xfrm_state, bpf_skb_load_bytes_relative, bpf_fib_lookup, bpf_skb_cgroup_id, bpf_skb_ancestor_cgroup_id, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_sk_fullsock, bpf_tcp_sock, bpf_skb_ecn_set_ce, bpf_get_listener_sock, bpf_skc_lookup_tcp, bpf_tcp_check_syncookie, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_tcp_gen_syncookie, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_sk_assign, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_csum_level, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_skb_cgroup_classid, bpf_redirect_neigh, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_redirect_peer, bpf_ktime_get_coarse_ns, bpf_check_mtu, bpf_for_each_map_elem, bpf_snprintf sched_act bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_skb_store_bytes, bpf_l3_csum_replace, bpf_l4_csum_replace, bpf_tail_call, bpf_clone_redirect, bpf_get_cgroup_classid, bpf_skb_vlan_push, bpf_skb_vlan_pop, bpf_skb_get_tunnel_key, bpf_skb_set_tunnel_key, bpf_redirect, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_csum_diff, bpf_skb_get_tunnel_opt, bpf_skb_set_tunnel_opt, bpf_skb_change_proto, bpf_skb_change_type, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_skb_change_tail, bpf_skb_pull_data, bpf_csum_update, bpf_set_hash_invalid, bpf_get_numa_node_id, bpf_skb_change_head, bpf_get_socket_cookie, bpf_get_socket_uid, bpf_set_hash, bpf_skb_adjust_room, bpf_skb_get_xfrm_state, bpf_skb_load_bytes_relative, bpf_fib_lookup, bpf_skb_cgroup_id, bpf_skb_ancestor_cgroup_id, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_sk_fullsock, bpf_tcp_sock, bpf_skb_ecn_set_ce, bpf_get_listener_sock, bpf_skc_lookup_tcp, bpf_tcp_check_syncookie, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_tcp_gen_syncookie, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_sk_assign, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_csum_level, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_skb_cgroup_classid, 
bpf_redirect_neigh, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_redirect_peer, bpf_ktime_get_coarse_ns, bpf_check_mtu, bpf_for_each_map_elem, bpf_snprintf tracepoint bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_probe_read, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_perf_event_read, bpf_perf_event_output, bpf_get_stackid, bpf_get_current_task, bpf_current_task_under_cgroup, bpf_get_numa_node_id, bpf_probe_read_str, bpf_perf_event_read_value, bpf_get_stack, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_send_signal, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_send_signal_thread, bpf_jiffies64, bpf_get_ns_current_pid_tgid, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_get_task_stack, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf xdp bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_redirect, bpf_perf_event_output, bpf_csum_diff, bpf_get_current_task, bpf_get_numa_node_id, bpf_xdp_adjust_head, bpf_redirect_map, bpf_xdp_adjust_meta, bpf_xdp_adjust_tail, bpf_fib_lookup, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_skc_lookup_tcp, bpf_tcp_check_syncookie, bpf_tcp_gen_syncookie, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_check_mtu, bpf_for_each_map_elem, bpf_snprintf perf_event bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_probe_read, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_perf_event_read, bpf_perf_event_output, bpf_get_stackid, bpf_get_current_task, bpf_current_task_under_cgroup, bpf_get_numa_node_id, bpf_probe_read_str, bpf_perf_event_read_value, bpf_perf_prog_read_value, bpf_get_stack, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_send_signal, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_send_signal_thread, bpf_jiffies64, bpf_read_branch_records, bpf_get_ns_current_pid_tgid, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_get_task_stack, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf cgroup_skb bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_perf_event_output, bpf_skb_load_bytes, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_socket_cookie, 
bpf_get_socket_uid, bpf_skb_load_bytes_relative, bpf_skb_cgroup_id, bpf_get_local_storage, bpf_skb_ancestor_cgroup_id, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_sk_fullsock, bpf_tcp_sock, bpf_skb_ecn_set_ce, bpf_get_listener_sock, bpf_skc_lookup_tcp, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_sk_cgroup_id, bpf_sk_ancestor_cgroup_id, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf cgroup_sock bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_get_cgroup_classid, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_socket_cookie, bpf_get_current_cgroup_id, bpf_get_local_storage, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_sk_storage_get, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_get_netns_cookie, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf lwt_in bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_cgroup_classid, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_csum_diff, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_skb_pull_data, bpf_get_numa_node_id, bpf_lwt_push_encap, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf lwt_out bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_cgroup_classid, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_csum_diff, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_skb_pull_data, bpf_get_numa_node_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, 
bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf lwt_xmit bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_skb_store_bytes, bpf_l3_csum_replace, bpf_l4_csum_replace, bpf_tail_call, bpf_clone_redirect, bpf_get_cgroup_classid, bpf_skb_get_tunnel_key, bpf_skb_set_tunnel_key, bpf_redirect, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_csum_diff, bpf_skb_get_tunnel_opt, bpf_skb_set_tunnel_opt, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_skb_change_tail, bpf_skb_pull_data, bpf_csum_update, bpf_set_hash_invalid, bpf_get_numa_node_id, bpf_skb_change_head, bpf_lwt_push_encap, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_csum_level, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf sock_ops bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_socket_cookie, bpf_setsockopt, bpf_sock_map_update, bpf_getsockopt, bpf_sock_ops_cb_flags_set, bpf_sock_hash_update, bpf_get_local_storage, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_tcp_sock, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_load_hdr_opt, bpf_store_hdr_opt, bpf_reserve_hdr_opt, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf sk_skb bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_skb_store_bytes, bpf_tail_call, bpf_perf_event_output, bpf_skb_load_bytes, bpf_get_current_task, bpf_skb_change_tail, bpf_skb_pull_data, bpf_get_numa_node_id, bpf_skb_change_head, bpf_get_socket_cookie, bpf_get_socket_uid, bpf_skb_adjust_room, bpf_sk_redirect_map, bpf_sk_redirect_hash, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_skc_lookup_tcp, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf cgroup_device bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, 
bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_uid_gid, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_current_cgroup_id, bpf_get_local_storage, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf sk_msg bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_cgroup_classid, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_msg_redirect_map, bpf_msg_apply_bytes, bpf_msg_cork_bytes, bpf_msg_pull_data, bpf_msg_redirect_hash, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_msg_push_data, bpf_msg_pop_data, bpf_spin_lock, bpf_spin_unlock, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf raw_tracepoint bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_probe_read, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_perf_event_read, bpf_perf_event_output, bpf_get_stackid, bpf_get_current_task, bpf_current_task_under_cgroup, bpf_get_numa_node_id, bpf_probe_read_str, bpf_perf_event_read_value, bpf_get_stack, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_send_signal, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_send_signal_thread, bpf_jiffies64, bpf_get_ns_current_pid_tgid, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_get_task_stack, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf cgroup_sock_addr bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_get_cgroup_classid, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_socket_cookie, bpf_setsockopt, bpf_getsockopt, bpf_bind, bpf_get_current_cgroup_id, bpf_get_local_storage, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_skc_lookup_tcp, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, 
bpf_get_netns_cookie, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf lwt_seg6local bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_cgroup_classid, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_csum_diff, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_skb_pull_data, bpf_get_numa_node_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf lirc_mode2 not supported sk_reuseport bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_skb_load_bytes, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_socket_cookie, bpf_skb_load_bytes_relative, bpf_sk_select_reuseport, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf flow_dissector bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_skb_load_bytes, bpf_get_current_task, bpf_get_numa_node_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf cgroup_sysctl bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_uid_gid, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_current_cgroup_id, bpf_get_local_storage, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_sysctl_get_name, bpf_sysctl_get_current_value, bpf_sysctl_get_new_value, bpf_sysctl_set_new_value, bpf_strtol, bpf_strtoul, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, 
bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf raw_tracepoint_writable bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_probe_read, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_perf_event_read, bpf_perf_event_output, bpf_get_stackid, bpf_get_current_task, bpf_current_task_under_cgroup, bpf_get_numa_node_id, bpf_probe_read_str, bpf_perf_event_read_value, bpf_get_stack, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_send_signal, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_send_signal_thread, bpf_jiffies64, bpf_get_ns_current_pid_tgid, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_get_task_stack, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf cgroup_sockopt bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_uid_gid, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_current_cgroup_id, bpf_get_local_storage, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_tcp_sock, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf tracing not supported struct_ops bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_probe_read, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_skb_store_bytes, bpf_l3_csum_replace, bpf_l4_csum_replace, bpf_tail_call, bpf_clone_redirect, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_get_cgroup_classid, bpf_skb_vlan_push, bpf_skb_vlan_pop, bpf_skb_get_tunnel_key, bpf_skb_set_tunnel_key, bpf_perf_event_read, bpf_redirect, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_get_stackid, bpf_csum_diff, bpf_skb_get_tunnel_opt, bpf_skb_set_tunnel_opt, bpf_skb_change_proto, bpf_skb_change_type, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_current_task_under_cgroup, bpf_skb_change_tail, bpf_skb_pull_data, bpf_csum_update, bpf_set_hash_invalid, bpf_get_numa_node_id, bpf_skb_change_head, bpf_xdp_adjust_head, bpf_probe_read_str, bpf_get_socket_cookie, bpf_get_socket_uid, bpf_set_hash, bpf_setsockopt, bpf_skb_adjust_room, bpf_redirect_map, bpf_sk_redirect_map, bpf_sock_map_update, bpf_xdp_adjust_meta, bpf_perf_event_read_value, bpf_perf_prog_read_value, bpf_getsockopt, bpf_override_return, bpf_sock_ops_cb_flags_set, bpf_msg_redirect_map, bpf_msg_apply_bytes, bpf_msg_cork_bytes, bpf_msg_pull_data, bpf_bind, bpf_xdp_adjust_tail, bpf_skb_get_xfrm_state, bpf_get_stack, bpf_skb_load_bytes_relative, bpf_fib_lookup, bpf_sock_hash_update, bpf_msg_redirect_hash, bpf_sk_redirect_hash, 
bpf_lwt_push_encap, bpf_lwt_seg6_store_bytes, bpf_lwt_seg6_adjust_srh, bpf_lwt_seg6_action, bpf_rc_repeat, bpf_rc_keydown, bpf_skb_cgroup_id, bpf_get_current_cgroup_id, bpf_get_local_storage, bpf_sk_select_reuseport, bpf_skb_ancestor_cgroup_id, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_msg_push_data, bpf_msg_pop_data, bpf_rc_pointer_rel, bpf_spin_lock, bpf_spin_unlock, bpf_sk_fullsock, bpf_tcp_sock, bpf_skb_ecn_set_ce, bpf_get_listener_sock, bpf_skc_lookup_tcp, bpf_tcp_check_syncookie, bpf_sysctl_get_name, bpf_sysctl_get_current_value, bpf_sysctl_get_new_value, bpf_sysctl_set_new_value, bpf_strtol, bpf_strtoul, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_send_signal, bpf_tcp_gen_syncookie, bpf_skb_output, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_tcp_send_ack, bpf_send_signal_thread, bpf_jiffies64, bpf_read_branch_records, bpf_get_ns_current_pid_tgid, bpf_xdp_output, bpf_get_netns_cookie, bpf_get_current_ancestor_cgroup_id, bpf_sk_assign, bpf_ktime_get_boot_ns, bpf_seq_printf, bpf_seq_write, bpf_sk_cgroup_id, bpf_sk_ancestor_cgroup_id, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_csum_level, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_get_task_stack, bpf_load_hdr_opt, bpf_store_hdr_opt, bpf_reserve_hdr_opt, bpf_inode_storage_get, bpf_inode_storage_delete, bpf_d_path, bpf_copy_from_user, bpf_snprintf_btf, bpf_seq_printf_btf, bpf_skb_cgroup_classid, bpf_redirect_neigh, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_redirect_peer, bpf_task_storage_get, bpf_task_storage_delete, bpf_get_current_task_btf, bpf_bprm_opts_set, bpf_ktime_get_coarse_ns, bpf_ima_inode_hash, bpf_sock_from_file, bpf_check_mtu, bpf_for_each_map_elem, bpf_snprintf, bpf_sys_bpf, bpf_btf_find_by_name_kind, bpf_sys_close ext not supported lsm not supported sk_lookup bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_sk_assign, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf Table 7.3. Available map types Map type Available hash yes array yes prog_array yes perf_event_array yes percpu_hash yes percpu_array yes stack_trace yes cgroup_array yes lru_hash yes lru_percpu_hash yes lpm_trie yes array_of_maps yes hash_of_maps yes devmap yes sockmap yes cpumap yes xskmap yes sockhash yes cgroup_storage yes reuseport_sockarray yes percpu_cgroup_storage yes queue yes stack yes sk_storage yes devmap_hash yes struct_ops no ringbuf yes inode_storage yes task_storage no
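The tables above show the kind of report that bpftool emits. A minimal sketch of generating a comparable listing on a running host, assuming the bpftool package is installed and you have root privileges:
# Probe the running kernel for available program types, map types, and supported helpers
sudo bpftool feature probe kernel
# The same probe as JSON output, which is easier to diff between kernel versions
sudo bpftool -j feature probe kernel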
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.9_release_notes/available_bpf_features
Chapter 3. Adding storage resources for hybrid or Multicloud
Chapter 3. Adding storage resources for hybrid or Multicloud 3.1. Creating a new backing store Use this procedure to create a new backing store in OpenShift Data Foundation. Prerequisites Administrator access to OpenShift Data Foundation. Procedure In the OpenShift Web Console, click Storage Object Storage . Click the Backing Store tab. Click Create Backing Store . On the Create New Backing Store page, perform the following: Enter a Backing Store Name . Select a Provider . Select a Region . Optional: Enter an Endpoint . Select a Secret from the drop-down list, or create your own secret. Optionally, you can Switch to Credentials view which lets you fill in the required secrets. For more information on creating an OCP secret, see the section Creating the secret in the Openshift Container Platform documentation. Each backingstore requires a different secret. For more information on creating the secret for a particular backingstore, see the Section 3.3, "Adding storage resources for hybrid or Multicloud using the MCG command line interface" and follow the procedure for the addition of storage resources using a YAML. Note This menu is relevant for all providers except Google Cloud and local PVC. Enter the Target bucket . The target bucket is a container storage that is hosted on the remote cloud service. It allows you to create a connection that tells the MCG that it can use this bucket for the system. Click Create Backing Store . Verification steps In the OpenShift Web Console, click Storage Object Storage . Click the Backing Store tab to view all the backing stores. 3.2. Overriding the default backing store You can use the manualDefaultBackingStore flag to override the default NooBaa backing store and remove it if you do not want to use the default backing store configuration. This provides flexibility to customize your backing store configuration and tailor it to your specific needs. By leveraging this feature, you can further optimize your system and enhance its performance. Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Download the Multicloud Object Gateway (MCG) command-line interface: Note Specify the appropriate architecture for enabling the repositories using subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure Check if noobaa-default-backing-store is present: Patch the NooBaa CR to enable manualDefaultBackingStore : Important Use the Multicloud Object Gateway CLI to create a new backing store and update accounts. Create a new default backing store to override the default backing store. For example: Replace NEW-DEFAULT-BACKING-STORE with the name you want for your new default backing store. Update the admin account to use the new default backing store as its default resource: Replace NEW-DEFAULT-BACKING-STORE with the name of the backing store from the step. Updating the default resource for admin accounts ensures that the new configuration is used throughout your system. Configure the default-bucketclass to use the new default backingstore: Optional: Delete the noobaa-default-backing-store. Delete all instances of and buckets associated with noobaa-default-backing-store and update the accounts using it as resource. 
Delete the noobaa-default-backing-store: You must enable the manualDefaultBackingStore flag before proceeding. Additionally, it is crucial to update all accounts that use the default resource and delete all instances of and buckets associated with the default backing store to ensure a smooth transition. 3.3. Adding storage resources for hybrid or Multicloud using the MCG command line interface The Multicloud Object Gateway (MCG) simplifies the process of spanning data across the cloud provider and clusters. Add a backing storage that can be used by the MCG. Depending on the type of your deployment, you can choose one of the following procedures to create a backing storage: For creating an AWS-backed backingstore, see Section 3.3.1, "Creating an AWS-backed backingstore" For creating an AWS-STS-backed backingstore, see Section 3.3.2, "Creating an AWS-STS-backed backingstore" For creating an IBM COS-backed backingstore, see Section 3.3.3, "Creating an IBM COS-backed backingstore" For creating an Azure-backed backingstore, see Section 3.3.4, "Creating an Azure-backed backingstore" For creating a GCP-backed backingstore, see Section 3.3.5, "Creating a GCP-backed backingstore" For creating a local Persistent Volume-backed backingstore, see Section 3.3.6, "Creating a local Persistent Volume-backed backingstore" For VMware deployments, skip to Section 3.4, "Creating an s3 compatible Multicloud Object Gateway backingstore" for further instructions. 3.3.1. Creating an AWS-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, in case of IBM Z use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure Using MCG command-line interface From the MCG command-line interface, run the following command: <backingstore_name> The name of the backingstore. <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> The AWS access key ID and secret access key you created for this purpose. <bucket-name> The existing AWS bucket name. This argument indicates to the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> Supply and encode your own AWS access key ID and secret access key using Base64, and use the results for <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . <backingstore-secret-name> The name of the backingstore secret created in the step. Apply the following YAML for a specific backing store: <bucket-name> The existing AWS bucket name. <backingstore-secret-name> The name of the backingstore secret created in the step. 3.3.2. Creating an AWS-STS-backed backingstore Amazon Web Services Security Token Service (AWS STS) is an AWS feature and it is a way to authenticate using short-lived credentials. 
Creating an AWS-STS-backed backingstore involves the following: Creating an AWS role using a script, which helps to get the temporary security credentials for the role session Installing OpenShift Data Foundation operator in AWS STS OpenShift cluster Creating backingstore in AWS STS OpenShift cluster 3.3.2.1. Creating an AWS role using a script You need to create a role and pass the role Amazon resource name (ARN) while installing the OpenShift Data Foundation operator. Prerequisites Configure Red Hat OpenShift Container Platform cluster with AWS STS. For more information, see Configuring an AWS cluster to use short-term credentials . Procedure Create an AWS role using a script that matches OpenID Connect (OIDC) configuration for Multicloud Object Gateway (MCG) on OpenShift Data Foundation. The following example shows the details that are required to create the role: where 123456789123 Is the AWS account ID mybucket Is the bucket name (using public bucket configuration) us-east-2 Is the AWS region openshift-storage Is the namespace name Sample script 3.3.2.2. Installing OpenShift Data Foundation operator in AWS STS OpenShift cluster Prerequisites Configure Red Hat OpenShift Container Platform cluster with AWS STS. For more information, see Configuring an AWS cluster to use short-term credentials . Create an AWS role using a script that matches OpenID Connect (OIDC) configuration. For more information, see Creating an AWS role using a script . Procedure Install OpenShift Data Foundation Operator from the Operator Hub. During the installation add the role ARN in the ARN Details field. Make sure that the Update approval field is set to Manual . 3.3.2.3. Creating a new AWS STS backingstore Prerequisites Configure Red Hat OpenShift Container Platform cluster with AWS STS. For more information, see Configuring an AWS cluster to use short-term credentials . Create an AWS role using a script that matches OpenID Connect (OIDC) configuration. For more information, see Creating an AWS role using a script . Install OpenShift Data Foundation Operator. For more information, see Installing OpenShift Data Foundation operator in AWS STS OpenShift cluster . Procedure Install Multicloud Object Gateway (MCG). It is installed with the default backingstore by using the short-lived credentials. After the MCG system is ready, you can create more backingstores of the type aws-sts-s3 using the following MCG command line interface command: where backingstore-name Name of the backingstore aws-sts-role-arn The AWS STS role ARN which will assume role region The AWS bucket region target-bucket The target bucket name on the cloud 3.3.3. Creating an IBM COS-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For example, For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure Using command-line interface From the MCG command-line interface, run the following command: <backingstore_name> The name of the backingstore. 
<IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , and <IBM COS ENDPOINT> An IBM access key ID, secret access key, and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. To generate the above keys on IBM Cloud, you must include HMAC credentials while creating the service credentials for your target bucket. <bucket-name> An existing IBM bucket name. This argument indicates to the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> Provide and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of these attributes respectively. <backingstore-secret-name> The name of the backingstore secret. Apply the following YAML for a specific backing store: <bucket-name> An existing IBM COS bucket name. This argument indicates to the MCG which bucket to use as a target bucket for its backingstore, and subsequently, data storage and administration. <endpoint> A regional endpoint that corresponds to the location of the existing IBM bucket. This argument indicates to the MCG which endpoint to use for its backingstore, and subsequently, data storage and administration. <backingstore-secret-name> The name of the secret created in the previous step. 3.3.4. Creating an Azure-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, in case of IBM Z use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure Using the MCG command-line interface From the MCG command-line interface, run the following command: <backingstore_name> The name of the backingstore. <AZURE ACCOUNT KEY> and <AZURE ACCOUNT NAME> An Azure account key and account name that you created for this purpose. <blob container name> An existing Azure blob container name. This argument indicates to the MCG which container to use as a target bucket for its backingstore, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <AZURE ACCOUNT NAME ENCODED IN BASE64> and <AZURE ACCOUNT KEY ENCODED IN BASE64> Supply and encode your own Azure Account Name and Account Key using Base64, and use the results in place of these attributes respectively. <backingstore-secret-name> A unique name for the backingstore secret. Apply the following YAML for a specific backing store: <blob-container-name> An existing Azure blob container name. This argument indicates to the MCG which container to use as a target bucket for its backingstore, and subsequently, data storage and administration. <backingstore-secret-name> The name of the secret created in the previous step. 3.3.5. Creating a GCP-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager.
For instance, in case of IBM Z use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure Using the MCG command-line interface From the MCG command-line interface, run the following command: <backingstore_name> Name of the backingstore. <PATH TO GCP PRIVATE KEY JSON FILE> A path to your GCP private key created for this purpose. <GCP bucket name> An existing GCP object storage bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <GCP PRIVATE KEY ENCODED IN BASE64> Provide and encode your own GCP service account private key using Base64, and use the results for this attribute. <backingstore-secret-name> A unique name of the backingstore secret. Apply the following YAML for a specific backing store: <target bucket> An existing Google storage bucket. This argument indicates to the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. <backingstore-secret-name> The name of the secret created in the previous step. 3.3.6. Creating a local Persistent Volume-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure Adding storage resources using the MCG command-line interface From the MCG command-line interface, run the following command: Note This command must be run from within the openshift-storage namespace. Adding storage resources using YAML Apply the following YAML for a specific backing store: <backingstore_name> The name of the backingstore. <NUMBER OF VOLUMES> The number of volumes you would like to create. Note that increasing the number of volumes scales up the storage. <VOLUME SIZE> Required size in GB of each volume. <CPU REQUEST> Guaranteed amount of CPU requested in CPU unit m . <MEMORY REQUEST> Guaranteed amount of memory requested. <CPU LIMIT> Maximum amount of CPU that can be consumed in CPU unit m . <MEMORY LIMIT> Maximum amount of memory that can be consumed. <LOCAL STORAGE CLASS> The local storage class name. It is recommended to use ocs-storagecluster-ceph-rbd . The output will be similar to the following: 3.4. Creating an s3 compatible Multicloud Object Gateway backingstore The Multicloud Object Gateway (MCG) can use any S3 compatible object storage as a backing store, for example, Red Hat Ceph Storage's RADOS Object Gateway (RGW). The following procedure shows how to create an S3 compatible MCG backing store for Red Hat Ceph Storage's RGW. Note that when the RGW is deployed, the OpenShift Data Foundation operator creates an S3 compatible backingstore for MCG automatically.
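Because of this, it can be worth checking whether a suitable backingstore already exists before creating one manually. A minimal, read-only check, assuming you are logged in to the cluster with the oc client, might look like the following; the backingstore name in the second command is illustrative:
# List the backing stores in the openshift-storage namespace and look for an s3-compatible entry
oc get backingstore -n openshift-storage
# Inspect a candidate entry in detail (the name here is illustrative)
oc describe backingstore <rgw-backingstore-name> -n openshift-storage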
Procedure From the MCG command-line interface, run the following command: Note This command must be run from within the openshift-storage namespace. To get the <RGW ACCESS KEY> and <RGW SECRET KEY> , run the following command using your RGW user secret name: Decode the access key ID and the access key from Base64 and keep them. Replace <RGW USER ACCESS KEY> and <RGW USER SECRET ACCESS KEY> with the appropriate, decoded data from the step. Replace <bucket-name> with an existing RGW bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. To get the <RGW endpoint> , see Accessing the RADOS Object Gateway S3 endpoint . The output will be similar to the following: You can also create the backingstore using a YAML: Create a CephObjectStore user. This also creates a secret containing the RGW credentials: Replace <RGW-Username> and <Display-name> with a unique username and display name. Apply the following YAML for an S3-Compatible backing store: Replace <backingstore-secret-name> with the name of the secret that was created with CephObjectStore in the step. Replace <bucket-name> with an existing RGW bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. To get the <RGW endpoint> , see Accessing the RADOS Object Gateway S3 endpoint . 3.5. Creating a new bucket class Bucket class is a CRD representing a class of buckets that defines tiering policies and data placements for an Object Bucket Class. Use this procedure to create a bucket class in OpenShift Data Foundation. Procedure In the OpenShift Web Console, click Storage Object Storage . Click the Bucket Class tab. Click Create Bucket Class . On the Create new Bucket Class page, perform the following: Select the bucket class type and enter a bucket class name. Select the BucketClass type . Choose one of the following options: Standard : data will be consumed by a Multicloud Object Gateway (MCG), deduped, compressed and encrypted. Namespace : data is stored on the NamespaceStores without performing de-duplication, compression or encryption. By default, Standard is selected. Enter a Bucket Class Name . Click . In Placement Policy , select Tier 1 - Policy Type and click . You can choose either one of the options as per your requirements. Spread allows spreading of the data across the chosen resources. Mirror allows full duplication of the data across the chosen resources. Click Add Tier to add another policy tier. Select at least one Backing Store resource from the available list if you have selected Tier 1 - Policy Type as Spread and click . Alternatively, you can also create a new backing store . Note You need to select at least 2 backing stores when you select Policy Type as Mirror in step. Review and confirm Bucket Class settings. Click Create Bucket Class . Verification steps In the OpenShift Web Console, click Storage Object Storage . Click the Bucket Class tab and search the new Bucket Class. 3.6. Editing a bucket class Use the following procedure to edit the bucket class components through the YAML file by clicking the edit button on the Openshift web console. Prerequisites Administrator access to OpenShift Web Console. Procedure In the OpenShift Web Console, click Storage Object Storage . Click the Bucket Class tab. Click the Action Menu (...) to the Bucket class you want to edit. Click Edit Bucket Class . 
You are redirected to the YAML file, make the required changes in this file and click Save . 3.7. Editing backing stores for bucket class Use the following procedure to edit an existing Multicloud Object Gateway (MCG) bucket class to change the underlying backing stores used in a bucket class. Prerequisites Administrator access to OpenShift Web Console. A bucket class. Backing stores. Procedure In the OpenShift Web Console, click Storage Object Storage . Click the Bucket Class tab. Click the Action Menu (...) to the Bucket class you want to edit. Click Edit Bucket Class Resources . On the Edit Bucket Class Resources page, edit the bucket class resources either by adding a backing store to the bucket class or by removing a backing store from the bucket class. You can also edit bucket class resources created with one or two tiers and different placement policies. To add a backing store to the bucket class, select the name of the backing store. To remove a backing store from the bucket class, uncheck the name of the backing store. Click Save .
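If you prefer the command line to the web console, a similar backing store change can be made by patching the BucketClass resource directly. The following is a minimal sketch modeled on the patch command shown earlier for the default bucket class; the bucket class and backing store names are illustrative and assume a single placement tier:
# Replace the first backing store in the first tier of the bucket class (names are illustrative)
oc patch bucketclass my-bucket-class -n openshift-storage --type=json \
  --patch='[{"op": "replace", "path": "/spec/placementPolicy/tiers/0/backingStores/0", "value": "my-new-backingstore"}]'
# Review the resulting placement policy
oc get bucketclass my-bucket-class -n openshift-storage -o yaml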
[ "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "oc get backingstore NAME TYPE PHASE AGE noobaa-default-backing-store pv-pool Creating 102s", "oc patch noobaa/noobaa --type json --patch='[{\"op\":\"add\",\"path\":\"/spec/manualDefaultBackingStore\",\"value\":true}]'", "noobaa backingstore create pv-pool _NEW-DEFAULT-BACKING-STORE_ --num-volumes 1 --pv-size-gb 16", "noobaa account update [email protected] --new_default_resource=_NEW-DEFAULT-BACKING-STORE_", "oc patch Bucketclass noobaa-default-bucket-class -n openshift-storage --type=json --patch='[{\"op\": \"replace\", \"path\": \"/spec/placementPolicy/tiers/0/backingStores/0\", \"value\": \"NEW-DEFAULT-BACKING-STORE\"}]'", "oc delete backingstore noobaa-default-backing-store -n openshift-storage | oc patch -n openshift-storage backingstore/noobaa-default-backing-store --type json --patch='[ { \"op\": \"remove\", \"path\": \"/metadata/finalizers\" } ]'", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa backingstore create aws-s3 <backingstore_name> --access-key=<AWS ACCESS KEY> --secret-key=<AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage", "INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"aws-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-aws-resource\"", "apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> namespace: openshift-storage type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: awsS3: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <bucket-name> type: aws-s3", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::123456789123:oidc-provider/mybucket-oidc.s3.us-east-2.amazonaws.com\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"mybucket-oidc.s3.us-east-2.amazonaws.com:sub\": [ \"system:serviceaccount:openshift-storage:noobaa\", \"system:serviceaccount:openshift-storage:noobaa-endpoint\" ] } } } ] }", "#!/bin/bash set -x This is a sample script to help you deploy MCG on AWS STS cluster. This script shows how to create role-policy and then create the role in AWS. For more information see: https://docs.openshift.com/rosa/authentication/assuming-an-aws-iam-role-for-a-service-account.html WARNING: This is a sample script. You need to adjust the variables based on your requirement. Variables : user variables - REPLACE these variables with your values: ROLE_NAME=\"<role-name>\" # role name that you pick in your AWS account NAMESPACE=\"<namespace>\" # namespace name where MCG is running. For OpenShift Data Foundation, it is openshift-storage. 
MCG variables SERVICE_ACCOUNT_NAME_1=\"<service-account-name-1>\" # The service account name of statefulset core and deployment operator (MCG operator) SERVICE_ACCOUNT_NAME_2=\"<service-account-name-2>\" # The service account name of deployment endpoint (MCG endpoint) AWS variables Make sure these values are not empty (AWS_ACCOUNT_ID, OIDC_PROVIDER) AWS_ACCOUNT_ID is your AWS account number AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query \"Account\" --output text) If you want to create the role before using the cluster, replace this field too. The OIDC provider is in the structure: 1) <OIDC-bucket>.s3.<aws-region>.amazonaws.com. for OIDC bucket configurations are in an S3 public bucket 2) `<characters>.cloudfront.net` for OIDC bucket configurations in an S3 private bucket with a public CloudFront distribution URL OIDC_PROVIDER=USD(oc get authentication cluster -ojson | jq -r .spec.serviceAccountIssuer | sed -e \"s/^https:\\/\\///\") the permission (S3 full access) POLICY_ARN_STRINGS=\"arn:aws:iam::aws:policy/AmazonS3FullAccess\" Creating the role (with AWS command line interface) read -r -d '' TRUST_RELATIONSHIP <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_PROVIDER}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_PROVIDER}:sub\": [ \"system:serviceaccount:USD{NAMESPACE}:USD{SERVICE_ACCOUNT_NAME_1}\", \"system:serviceaccount:USD{NAMESPACE}:USD{SERVICE_ACCOUNT_NAME_2}\" ] } } } ] } EOF echo \"USD{TRUST_RELATIONSHIP}\" > trust.json aws iam create-role --role-name \"USDROLE_NAME\" --assume-role-policy-document file://trust.json --description \"role for demo\" while IFS= read -r POLICY_ARN; do echo -n \"Attaching USDPOLICY_ARN ... 
\" aws iam attach-role-policy --role-name \"USDROLE_NAME\" --policy-arn \"USD{POLICY_ARN}\" echo \"ok.\" done <<< \"USDPOLICY_ARN_STRINGS\"", "noobaa backingstore create aws-sts-s3 <backingstore-name> --aws-sts-arn=<aws-sts-role-arn> --region=<region> --target-bucket=<target-bucket>", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa backingstore create ibm-cos <backingstore_name> --access-key=<IBM ACCESS KEY> --secret-key=<IBM SECRET ACCESS KEY> --endpoint=<IBM COS ENDPOINT> --target-bucket <bucket-name> -n openshift-storage", "INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"ibm-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-ibm-resource\"", "apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> namespace: openshift-storage type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: ibmCos: endpoint: <endpoint> secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <bucket-name> type: ibm-cos", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa backingstore create azure-blob <backingstore_name> --account-key=<AZURE ACCOUNT KEY> --account-name=<AZURE ACCOUNT NAME> --target-blob-container <blob container name> -n openshift-storage", "INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"azure-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-azure-resource\"", "apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> type: Opaque data: AccountName: <AZURE ACCOUNT NAME ENCODED IN BASE64> AccountKey: <AZURE ACCOUNT KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: azureBlob: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBlobContainer: <blob-container-name> type: azure-blob", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa backingstore create google-cloud-storage <backingstore_name> --private-key-json-file=<PATH TO GCP PRIVATE KEY JSON FILE> --target-bucket <GCP bucket name> -n openshift-storage", "INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"google-gcp\" INFO[0002] ✅ Created: Secret \"backing-store-google-cloud-storage-gcp\"", "apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> type: Opaque data: GoogleServiceAccountPrivateKeyJson: <GCP PRIVATE KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: googleCloudStorage: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <target bucket> type: google-cloud-storage", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager 
repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa -n openshift-storage backingstore create pv-pool <backingstore_name> --num-volumes <NUMBER OF VOLUMES> --pv-size-gb <VOLUME SIZE> --request-cpu <CPU REQUEST> --request-memory <MEMORY REQUEST> --limit-cpu <CPU LIMIT> --limit-memory <MEMORY LIMIT> --storage-class <LOCAL STORAGE CLASS>", "apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <backingstore_name> namespace: openshift-storage spec: pvPool: numVolumes: <NUMBER OF VOLUMES> resources: requests: storage: <VOLUME SIZE> cpu: <CPU REQUEST> memory: <MEMORY REQUEST> limits: cpu: <CPU LIMIT> memory: <MEMORY LIMIT> storageClass: <LOCAL STORAGE CLASS> type: pv-pool", "INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Exists: BackingStore \"local-mcg-storage\"", "noobaa backingstore create s3-compatible rgw-resource --access-key=<RGW ACCESS KEY> --secret-key=<RGW SECRET KEY> --target-bucket=<bucket-name> --endpoint=<RGW endpoint> -n openshift-storage", "get secret <RGW USER SECRET NAME> -o yaml -n openshift-storage", "INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"rgw-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-rgw-resource\"", "apiVersion: ceph.rook.io/v1 kind: CephObjectStoreUser metadata: name: <RGW-Username> namespace: openshift-storage spec: store: ocs-storagecluster-cephobjectstore displayName: \"<Display-name>\"", "apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <backingstore-name> namespace: openshift-storage spec: s3Compatible: endpoint: <RGW endpoint> secret: name: <backingstore-secret-name> namespace: openshift-storage signatureVersion: v4 targetBucket: <RGW-bucket-name> type: s3-compatible" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/managing_hybrid_and_multicloud_resources/adding-storage-resources-for-hybrid-or-multicloud_rhodf
12.3. Requesting a CA-signed Certificate Through SCEP
12.3. Requesting a CA-signed Certificate Through SCEP The Simple Certificate Enrollment Protocol (SCEP) automates and simplifies the process of certificate management with the CA. It lets a client request and retrieve a certificate over HTTP directly from the CA's SCEP service. This process is secured by a one-time PIN that is usually valid only for a limited time. The following example adds a SCEP CA configuration to certmonger , requests a new certificate, and adds it to the local NSS database. Add the CA configuration to certmonger : -c : Mandatory nickname for the CA configuration. The same value can later be passed to other getcert commands. -u : URL to the server's SCEP interface. Mandatory parameter when using an HTTPS URL: -R CA_Filename : Location of the PEM-formatted copy of the SCEP server's CA certificate, used for the HTTPS encryption. Verify that the CA configuration has been successfully added: The CA configuration was successfully added, when the CA certificate thumbprints were retrieved over SCEP and shown in the command's output. When accessing the server over unencrypted HTTP, manually compare the thumbprints with the ones displayed at the SCEP server to prevent a Man-in-the-middle attack. Request a certificate from the CA: -I : Name of the task. The same value can later be passed to the getcert list command. -c : CA configuration to submit the request to. -d : Directory with the NSS database to store the certificate and key. -n : Nickname of the certificate, used in the NSS database. -N : Subject name in the CSR. -L : Time-limited one-time PIN issued by the CA. Right after submitting the request, you can verify that a certificate was issued and correctly stored in the local database: The status MONITORING signifies a successful retrieval of the issued certificate. The getcert-list(1) man page lists other possible states and their meanings.
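As a worked example, the steps above can be strung together as follows. All values shown here (CA nickname, SCEP URL, CA certificate path, task and certificate names, subject, and PIN) are illustrative placeholders, not defaults:
# Add the SCEP CA configuration (HTTPS URL, so a copy of the CA certificate is required)
getcert add-scep-ca -c Example-SCEP-CA -u https://scep.example.com/scep -R /etc/pki/scep-ca.pem
# Confirm that the CA certificate thumbprints were retrieved
getcert list-cas -c Example-SCEP-CA
# Request a certificate into the local NSS database using the one-time PIN
getcert request -I example-task -c Example-SCEP-CA -d /etc/pki/nssdb \
    -n ExampleCert -N cn="client.example.com" -L 0123456789
# Check that the request has reached the MONITORING state
getcert list -I example-task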
[ "getcert add-scep-ca -c CA_Name -u SCEP_URL", "getcert list-cas -c CA_Name CA 'CA_Name': is-default: no ca-type: EXTERNAL helper-location: /usr/libexec/certmonger/scep-submit -u http://SCEP_server_enrollment_interface_URL SCEP CA certificate thumbprint (MD5): A67C2D4B 771AC186 FCCA654A 5E55AAF7 SCEP CA certificate thumbprint (SHA1): FBFF096C 6455E8E9 BD55F4A5 5787C43F 1F512279", "getcert request -I Task_Name -c CA_Name -d /etc/pki/nssdb -n Certificate_Name -N cn=\" Subject Name \" -L one-time_PIN", "getcert list -I TaskName Request ID 'Task_Name': status: MONITORING stuck: no key pair storage: type=NSSDB,location='/etc/pki/nssdb',nickname='TestCert',token='NSS Certificate DB' certificate: type=NSSDB,location='/etc/pki/nssdb',nickname='TestCert',token='NSS Certificate DB' signing request thumbprint (MD5): 503A8EDD DE2BE17E 5BAA3A57 D68C9C1B signing request thumbprint (SHA1): B411ECE4 D45B883A 75A6F14D 7E3037F1 D53625F4 CA: AD-Name issuer: CN=windows-CA,DC=ad,DC=example,DC=com subject: CN=Test Certificate expires: 2018-05-06 10:28:06 UTC key usage: digitalSignature,keyEncipherment eku: iso.org.dod.internet.security.mechanisms.8.2.2 certificate template/profile: IPSECIntermediateOffline pre-save command: post-save command: track: yes auto-renew: yes" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system-level_authentication_guide/certmonger-scep
4.8. Starting LVS
4.8. Starting LVS To start LVS, it is best to have two root terminals open simultaneously or two simultaneous root ssh sessions open to the primary LVS router. In one terminal, watch the kernel log messages with the command: tail -f /var/log/messages Then start LVS by typing the following command into the other terminal: /sbin/service pulse start Follow the progress of the pulse service's startup in the terminal with the kernel log messages. When you see the following output, the pulse daemon has started properly: gratuitous lvs arps finished To stop watching /var/log/messages , type Ctrl + c . From this point on, the primary LVS router is also the active LVS router. While you can make requests to LVS at this point, you should start the backup LVS router before putting LVS into service. To do this, simply repeat the process described above on the backup LVS router node. After completing this final step, LVS will be up and running.
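To confirm that LVS is actually routing traffic, you can check the state of the pulse service and inspect the virtual server table. This is a minimal check and assumes the ipvsadm package is installed on the router:
# Verify that the pulse daemon is running on the active router
/sbin/service pulse status
# Display the current virtual server table; the virtual servers and their real servers should be listed
/sbin/ipvsadm -L -n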
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/virtual_server_administration/s1-lvs-start-vsa
Chapter 9. Admission plugins
Chapter 9. Admission plugins Admission plugins are used to help regulate how OpenShift Container Platform functions. 9.1. About admission plugins Admission plugins intercept requests to the master API to validate resource requests. After a request is authenticated and authorized, the admission plugins ensure that any associated policies are followed. For example, they are commonly used to enforce security policy, resource limitations or configuration requirements. Admission plugins run in sequence as an admission chain. If any admission plugin in the sequence rejects a request, the whole chain is aborted and an error is returned. OpenShift Container Platform has a default set of admission plugins enabled for each resource type. These are required for proper functioning of the cluster. Admission plugins ignore resources that they are not responsible for. In addition to the defaults, the admission chain can be extended dynamically through webhook admission plugins that call out to custom webhook servers. There are two types of webhook admission plugins: a mutating admission plugin and a validating admission plugin. The mutating admission plugin runs first and can both modify resources and validate requests. The validating admission plugin validates requests and runs after the mutating admission plugin so that modifications triggered by the mutating admission plugin can also be validated. Calling webhook servers through a mutating admission plugin can produce side effects on resources related to the target object. In such situations, you must take steps to validate that the end result is as expected. Warning Dynamic admission should be used cautiously because it impacts cluster control plane operations. When calling webhook servers through webhook admission plugins in OpenShift Container Platform 4.17, ensure that you have read the documentation fully and tested for side effects of mutations. Include steps to restore resources back to their original state prior to mutation, in the event that a request does not pass through the entire admission chain. 9.2. Default admission plugins Default validating and admission plugins are enabled in OpenShift Container Platform 4.17. These default plugins contribute to fundamental control plane functionality, such as ingress policy, cluster resource limit override and quota policy. Important Do not run workloads in or share access to default projects. Default projects are reserved for running core cluster components. The following default projects are considered highly privileged: default , kube-public , kube-system , openshift , openshift-infra , openshift-node , and other system-created projects that have the openshift.io/run-level label set to 0 or 1 . Functionality that relies on admission plugins, such as pod security admission, security context constraints, cluster resource quotas, and image reference resolution, does not work in highly privileged projects. The following lists contain the default admission plugins: Example 9.1. 
Validating admission plugins LimitRanger ServiceAccount PodNodeSelector Priority PodTolerationRestriction OwnerReferencesPermissionEnforcement PersistentVolumeClaimResize RuntimeClass CertificateApproval CertificateSigning CertificateSubjectRestriction autoscaling.openshift.io/ManagementCPUsOverride authorization.openshift.io/RestrictSubjectBindings scheduling.openshift.io/OriginPodNodeEnvironment network.openshift.io/ExternalIPRanger network.openshift.io/RestrictedEndpointsAdmission image.openshift.io/ImagePolicy security.openshift.io/SecurityContextConstraint security.openshift.io/SCCExecRestrictions route.openshift.io/IngressAdmission config.openshift.io/ValidateAPIServer config.openshift.io/ValidateAuthentication config.openshift.io/ValidateFeatureGate config.openshift.io/ValidateConsole operator.openshift.io/ValidateDNS config.openshift.io/ValidateImage config.openshift.io/ValidateOAuth config.openshift.io/ValidateProject config.openshift.io/DenyDeleteClusterConfiguration config.openshift.io/ValidateScheduler quota.openshift.io/ValidateClusterResourceQuota security.openshift.io/ValidateSecurityContextConstraints authorization.openshift.io/ValidateRoleBindingRestriction config.openshift.io/ValidateNetwork operator.openshift.io/ValidateKubeControllerManager ValidatingAdmissionWebhook ResourceQuota quota.openshift.io/ClusterResourceQuota Example 9.2. Mutating admission plugins NamespaceLifecycle LimitRanger ServiceAccount NodeRestriction TaintNodesByCondition PodNodeSelector Priority DefaultTolerationSeconds PodTolerationRestriction DefaultStorageClass StorageObjectInUseProtection RuntimeClass DefaultIngressClass autoscaling.openshift.io/ManagementCPUsOverride scheduling.openshift.io/OriginPodNodeEnvironment image.openshift.io/ImagePolicy security.openshift.io/SecurityContextConstraint security.openshift.io/DefaultSecurityContextConstraints MutatingAdmissionWebhook 9.3. Webhook admission plugins In addition to OpenShift Container Platform default admission plugins, dynamic admission can be implemented through webhook admission plugins that call webhook servers, to extend the functionality of the admission chain. Webhook servers are called over HTTP at defined endpoints. There are two types of webhook admission plugins in OpenShift Container Platform: During the admission process, the mutating admission plugin can perform tasks, such as injecting affinity labels. At the end of the admission process, the validating admission plugin can be used to make sure an object is configured properly, for example ensuring affinity labels are as expected. If the validation passes, OpenShift Container Platform schedules the object as configured. When an API request comes in, mutating or validating admission plugins use the list of external webhooks in the configuration and call them in parallel: If all of the webhooks approve the request, the admission chain continues. If any of the webhooks deny the request, the admission request is denied and the reason for doing so is based on the first denial. If more than one webhook denies the admission request, only the first denial reason is returned to the user. If an error is encountered when calling a webhook, the request is either denied or the webhook is ignored depending on the error policy set. If the error policy is set to Ignore , the request is unconditionally accepted in the event of a failure. If the policy is set to Fail , failed requests are denied. Using Ignore can result in unpredictable behavior for all clients. 
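To see which webhook admission plugins are already registered in a cluster, you can list the corresponding configuration objects. This is a quick, read-only check rather than part of any formal procedure, and the configuration name in the last command is illustrative:
# List the mutating and validating webhook configurations registered with the API server
oc get mutatingwebhookconfigurations,validatingwebhookconfigurations
# Inspect the rules and failure policy of one configuration (the name is illustrative)
oc get validatingwebhookconfiguration <configuration_name> -o yaml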
Communication between the webhook admission plugin and the webhook server must use TLS. Generate a CA certificate and use the certificate to sign the server certificate that is used by your webhook admission server. The PEM-encoded CA certificate is supplied to the webhook admission plugin using a mechanism, such as service serving certificate secrets. The following diagram illustrates the sequential admission chain process within which multiple webhook servers are called. Figure 9.1. API admission chain with mutating and validating admission plugins An example webhook admission plugin use case is where all pods must have a common set of labels. In this example, the mutating admission plugin can inject labels and the validating admission plugin can check that labels are as expected. OpenShift Container Platform would subsequently schedule pods that include required labels and reject those that do not. Some common webhook admission plugin use cases include: Namespace reservation. Limiting custom network resources managed by the SR-IOV network device plugin. Defining tolerations that enable taints to qualify which pods should be scheduled on a node. Pod priority class validation. Note The maximum default webhook timeout value in OpenShift Container Platform is 13 seconds, and it cannot be changed. 9.4. Types of webhook admission plugins Cluster administrators can call out to webhook servers through the mutating admission plugin or the validating admission plugin in the API server admission chain. 9.4.1. Mutating admission plugin The mutating admission plugin is invoked during the mutation phase of the admission process, which allows modification of resource content before it is persisted. One example webhook that can be called through the mutating admission plugin is the Pod Node Selector feature, which uses an annotation on a namespace to find a label selector and add it to the pod specification. Sample mutating admission plugin configuration apiVersion: admissionregistration.k8s.io/v1beta1 kind: MutatingWebhookConfiguration 1 metadata: name: <webhook_name> 2 webhooks: - name: <webhook_name> 3 clientConfig: 4 service: namespace: default 5 name: kubernetes 6 path: <webhook_url> 7 caBundle: <ca_signing_certificate> 8 rules: 9 - operations: 10 - <operation> apiGroups: - "" apiVersions: - "*" resources: - <resource> failurePolicy: <policy> 11 sideEffects: None 1 Specifies a mutating admission plugin configuration. 2 The name for the MutatingWebhookConfiguration object. Replace <webhook_name> with the appropriate value. 3 The name of the webhook to call. Replace <webhook_name> with the appropriate value. 4 Information about how to connect to, trust, and send data to the webhook server. 5 The namespace where the front-end service is created. 6 The name of the front-end service. 7 The webhook URL used for admission requests. Replace <webhook_url> with the appropriate value. 8 A PEM-encoded CA certificate that signs the server certificate that is used by the webhook server. Replace <ca_signing_certificate> with the appropriate certificate in base64 format. 9 Rules that define when the API server should use this webhook admission plugin. 10 One or more operations that trigger the API server to call this webhook admission plugin. Possible values are create , update , delete or connect . Replace <operation> and <resource> with the appropriate values. 11 Specifies how the policy should proceed if the webhook server is unavailable. 
Replace <policy> with either Ignore (to unconditionally accept the request in the event of a failure) or Fail (to deny the failed request). Using Ignore can result in unpredictable behavior for all clients. Important In OpenShift Container Platform 4.17, objects created by users or control loops through a mutating admission plugin might return unexpected results, especially if values set in an initial request are overwritten, which is not recommended. 9.4.2. Validating admission plugin A validating admission plugin is invoked during the validation phase of the admission process. This phase allows the enforcement of invariants on particular API resources to ensure that the resource does not change again. The Pod Node Selector is also an example of a webhook which is called by the validating admission plugin, to ensure that all nodeSelector fields are constrained by the node selector restrictions on the namespace. Sample validating admission plugin configuration apiVersion: admissionregistration.k8s.io/v1beta1 kind: ValidatingWebhookConfiguration 1 metadata: name: <webhook_name> 2 webhooks: - name: <webhook_name> 3 clientConfig: 4 service: namespace: default 5 name: kubernetes 6 path: <webhook_url> 7 caBundle: <ca_signing_certificate> 8 rules: 9 - operations: 10 - <operation> apiGroups: - "" apiVersions: - "*" resources: - <resource> failurePolicy: <policy> 11 sideEffects: Unknown 1 Specifies a validating admission plugin configuration. 2 The name for the ValidatingWebhookConfiguration object. Replace <webhook_name> with the appropriate value. 3 The name of the webhook to call. Replace <webhook_name> with the appropriate value. 4 Information about how to connect to, trust, and send data to the webhook server. 5 The namespace where the front-end service is created. 6 The name of the front-end service. 7 The webhook URL used for admission requests. Replace <webhook_url> with the appropriate value. 8 A PEM-encoded CA certificate that signs the server certificate that is used by the webhook server. Replace <ca_signing_certificate> with the appropriate certificate in base64 format. 9 Rules that define when the API server should use this webhook admission plugin. 10 One or more operations that trigger the API server to call this webhook admission plugin. Possible values are create , update , delete or connect . Replace <operation> and <resource> with the appropriate values. 11 Specifies how the policy should proceed if the webhook server is unavailable. Replace <policy> with either Ignore (to unconditionally accept the request in the event of a failure) or Fail (to deny the failed request). Using Ignore can result in unpredictable behavior for all clients. 9.5. Configuring dynamic admission This procedure outlines high-level steps to configure dynamic admission. The functionality of the admission chain is extended by configuring a webhook admission plugin to call out to a webhook server. The webhook server is also configured as an aggregated API server. This allows other OpenShift Container Platform components to communicate with the webhook using internal credentials and facilitates testing using the oc command. Additionally, this enables role based access control (RBAC) into the webhook and prevents token information from other API servers from being disclosed to the webhook. Prerequisites An OpenShift Container Platform account with cluster administrator access. The OpenShift Container Platform CLI ( oc ) installed. A published webhook server container image. 
Procedure Build a webhook server container image and make it available to the cluster using an image registry. Create a local CA key and certificate and use them to sign the webhook server's certificate signing request (CSR). Create a new project for webhook resources: USD oc new-project my-webhook-namespace 1 1 Note that the webhook server might expect a specific name. Define RBAC rules for the aggregated API service in a file called rbac.yaml : apiVersion: v1 kind: List items: - apiVersion: rbac.authorization.k8s.io/v1 1 kind: ClusterRoleBinding metadata: name: auth-delegator-my-webhook-namespace roleRef: kind: ClusterRole apiGroup: rbac.authorization.k8s.io name: system:auth-delegator subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server - apiVersion: rbac.authorization.k8s.io/v1 2 kind: ClusterRole metadata: annotations: name: system:openshift:online:my-webhook-server rules: - apiGroups: - online.openshift.io resources: - namespacereservations 3 verbs: - get - list - watch - apiVersion: rbac.authorization.k8s.io/v1 4 kind: ClusterRole metadata: name: system:openshift:online:my-webhook-requester rules: - apiGroups: - admission.online.openshift.io resources: - namespacereservations 5 verbs: - create - apiVersion: rbac.authorization.k8s.io/v1 6 kind: ClusterRoleBinding metadata: name: my-webhook-server-my-webhook-namespace roleRef: kind: ClusterRole apiGroup: rbac.authorization.k8s.io name: system:openshift:online:my-webhook-server subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server - apiVersion: rbac.authorization.k8s.io/v1 7 kind: RoleBinding metadata: namespace: kube-system name: extension-server-authentication-reader-my-webhook-namespace roleRef: kind: Role apiGroup: rbac.authorization.k8s.io name: extension-apiserver-authentication-reader subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server - apiVersion: rbac.authorization.k8s.io/v1 8 kind: ClusterRole metadata: name: my-cluster-role rules: - apiGroups: - admissionregistration.k8s.io resources: - validatingwebhookconfigurations - mutatingwebhookconfigurations verbs: - get - list - watch - apiGroups: - "" resources: - namespaces verbs: - get - list - watch - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: my-cluster-role roleRef: kind: ClusterRole apiGroup: rbac.authorization.k8s.io name: my-cluster-role subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server 1 Delegates authentication and authorization to the webhook server API. 2 Allows the webhook server to access cluster resources. 3 Points to resources. This example points to the namespacereservations resource. 4 Enables the aggregated API server to create admission reviews. 5 Points to resources. This example points to the namespacereservations resource. 6 Enables the webhook server to access cluster resources. 7 Role binding to read the configuration for terminating authentication. 8 Default cluster role and cluster role bindings for an aggregated API server. 
Apply those RBAC rules to the cluster: USD oc auth reconcile -f rbac.yaml Create a YAML file called webhook-daemonset.yaml that is used to deploy a webhook as a daemon set server in a namespace: apiVersion: apps/v1 kind: DaemonSet metadata: namespace: my-webhook-namespace name: server labels: server: "true" spec: selector: matchLabels: server: "true" template: metadata: name: server labels: server: "true" spec: serviceAccountName: server containers: - name: my-webhook-container 1 image: <image_registry_username>/<image_path>:<tag> 2 imagePullPolicy: IfNotPresent command: - <container_commands> 3 ports: - containerPort: 8443 4 volumeMounts: - mountPath: /var/serving-cert name: serving-cert readinessProbe: httpGet: path: /healthz port: 8443 5 scheme: HTTPS volumes: - name: serving-cert secret: defaultMode: 420 secretName: server-serving-cert 1 Note that the webhook server might expect a specific container name. 2 Points to a webhook server container image. Replace <image_registry_username>/<image_path>:<tag> with the appropriate value. 3 Specifies webhook container run commands. Replace <container_commands> with the appropriate value. 4 Defines the target port within pods. This example uses port 8443. 5 Specifies the port used by the readiness probe. This example uses port 8443. Deploy the daemon set: USD oc apply -f webhook-daemonset.yaml Define a secret for the service serving certificate signer, within a YAML file called webhook-secret.yaml : apiVersion: v1 kind: Secret metadata: namespace: my-webhook-namespace name: server-serving-cert type: kubernetes.io/tls data: tls.crt: <server_certificate> 1 tls.key: <server_key> 2 1 References the signed webhook server certificate. Replace <server_certificate> with the appropriate certificate in base64 format. 2 References the signed webhook server key. Replace <server_key> with the appropriate key in base64 format. Create the secret: USD oc apply -f webhook-secret.yaml Define a service account and service, within a YAML file called webhook-service.yaml : apiVersion: v1 kind: List items: - apiVersion: v1 kind: ServiceAccount metadata: namespace: my-webhook-namespace name: server - apiVersion: v1 kind: Service metadata: namespace: my-webhook-namespace name: server annotations: service.beta.openshift.io/serving-cert-secret-name: server-serving-cert spec: selector: server: "true" ports: - port: 443 1 targetPort: 8443 2 1 Defines the port that the service listens on. This example uses port 443. 2 Defines the target port within pods that the service forwards connections to. This example uses port 8443. Expose the webhook server within the cluster: USD oc apply -f webhook-service.yaml Define a custom resource definition for the webhook server, in a file called webhook-crd.yaml : apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: namespacereservations.online.openshift.io 1 spec: group: online.openshift.io 2 version: v1alpha1 3 scope: Cluster 4 names: plural: namespacereservations 5 singular: namespacereservation 6 kind: NamespaceReservation 7 1 Reflects CustomResourceDefinition spec values and is in the format <plural>.<group> . This example uses the namespacereservations resource. 2 REST API group name. 3 REST API version name. 4 Accepted values are Namespaced or Cluster . 5 Plural name to be included in URL. 6 Alias seen in oc output. 7 The reference for resource manifests. 
Apply the custom resource definition: USD oc apply -f webhook-crd.yaml Configure the webhook server also as an aggregated API server, within a file called webhook-api-service.yaml : apiVersion: apiregistration.k8s.io/v1beta1 kind: APIService metadata: name: v1beta1.admission.online.openshift.io spec: caBundle: <ca_signing_certificate> 1 group: admission.online.openshift.io groupPriorityMinimum: 1000 versionPriority: 15 service: name: server namespace: my-webhook-namespace version: v1beta1 1 A PEM-encoded CA certificate that signs the server certificate that is used by the webhook server. Replace <ca_signing_certificate> with the appropriate certificate in base64 format. Deploy the aggregated API service: USD oc apply -f webhook-api-service.yaml Define the webhook admission plugin configuration within a file called webhook-config.yaml . This example uses the validating admission plugin: apiVersion: admissionregistration.k8s.io/v1beta1 kind: ValidatingWebhookConfiguration metadata: name: namespacereservations.admission.online.openshift.io 1 webhooks: - name: namespacereservations.admission.online.openshift.io 2 clientConfig: service: 3 namespace: default name: kubernetes path: /apis/admission.online.openshift.io/v1beta1/namespacereservations 4 caBundle: <ca_signing_certificate> 5 rules: - operations: - CREATE apiGroups: - project.openshift.io apiVersions: - "*" resources: - projectrequests - operations: - CREATE apiGroups: - "" apiVersions: - "*" resources: - namespaces failurePolicy: Fail 1 Name for the ValidatingWebhookConfiguration object. This example uses the namespacereservations resource. 2 Name of the webhook to call. This example uses the namespacereservations resource. 3 Enables access to the webhook server through the aggregated API. 4 The webhook URL used for admission requests. This example uses the namespacereservation resource. 5 A PEM-encoded CA certificate that signs the server certificate that is used by the webhook server. Replace <ca_signing_certificate> with the appropriate certificate in base64 format. Deploy the webhook: USD oc apply -f webhook-config.yaml Verify that the webhook is functioning as expected. For example, if you have configured dynamic admission to reserve specific namespaces, confirm that requests to create those namespaces are rejected and that requests to create non-reserved namespaces succeed. 9.6. Additional resources Configuring the SR-IOV Network Operator Controlling pod placement using node taints Pod priority names
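As a complement to the final verification step in the procedure above, the following read-only checks confirm that the example webhook resources were registered. The object and namespace names are taken from the example configuration and will differ in a real deployment:
# Check that the aggregated API service for the webhook is available
oc get apiservice v1beta1.admission.online.openshift.io
# Check that the validating webhook configuration exists
oc get validatingwebhookconfiguration namespacereservations.admission.online.openshift.io
# Check that the webhook server pods are running in the webhook namespace
oc get pods -n my-webhook-namespace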
[ "apiVersion: admissionregistration.k8s.io/v1beta1 kind: MutatingWebhookConfiguration 1 metadata: name: <webhook_name> 2 webhooks: - name: <webhook_name> 3 clientConfig: 4 service: namespace: default 5 name: kubernetes 6 path: <webhook_url> 7 caBundle: <ca_signing_certificate> 8 rules: 9 - operations: 10 - <operation> apiGroups: - \"\" apiVersions: - \"*\" resources: - <resource> failurePolicy: <policy> 11 sideEffects: None", "apiVersion: admissionregistration.k8s.io/v1beta1 kind: ValidatingWebhookConfiguration 1 metadata: name: <webhook_name> 2 webhooks: - name: <webhook_name> 3 clientConfig: 4 service: namespace: default 5 name: kubernetes 6 path: <webhook_url> 7 caBundle: <ca_signing_certificate> 8 rules: 9 - operations: 10 - <operation> apiGroups: - \"\" apiVersions: - \"*\" resources: - <resource> failurePolicy: <policy> 11 sideEffects: Unknown", "oc new-project my-webhook-namespace 1", "apiVersion: v1 kind: List items: - apiVersion: rbac.authorization.k8s.io/v1 1 kind: ClusterRoleBinding metadata: name: auth-delegator-my-webhook-namespace roleRef: kind: ClusterRole apiGroup: rbac.authorization.k8s.io name: system:auth-delegator subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server - apiVersion: rbac.authorization.k8s.io/v1 2 kind: ClusterRole metadata: annotations: name: system:openshift:online:my-webhook-server rules: - apiGroups: - online.openshift.io resources: - namespacereservations 3 verbs: - get - list - watch - apiVersion: rbac.authorization.k8s.io/v1 4 kind: ClusterRole metadata: name: system:openshift:online:my-webhook-requester rules: - apiGroups: - admission.online.openshift.io resources: - namespacereservations 5 verbs: - create - apiVersion: rbac.authorization.k8s.io/v1 6 kind: ClusterRoleBinding metadata: name: my-webhook-server-my-webhook-namespace roleRef: kind: ClusterRole apiGroup: rbac.authorization.k8s.io name: system:openshift:online:my-webhook-server subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server - apiVersion: rbac.authorization.k8s.io/v1 7 kind: RoleBinding metadata: namespace: kube-system name: extension-server-authentication-reader-my-webhook-namespace roleRef: kind: Role apiGroup: rbac.authorization.k8s.io name: extension-apiserver-authentication-reader subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server - apiVersion: rbac.authorization.k8s.io/v1 8 kind: ClusterRole metadata: name: my-cluster-role rules: - apiGroups: - admissionregistration.k8s.io resources: - validatingwebhookconfigurations - mutatingwebhookconfigurations verbs: - get - list - watch - apiGroups: - \"\" resources: - namespaces verbs: - get - list - watch - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: my-cluster-role roleRef: kind: ClusterRole apiGroup: rbac.authorization.k8s.io name: my-cluster-role subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server", "oc auth reconcile -f rbac.yaml", "apiVersion: apps/v1 kind: DaemonSet metadata: namespace: my-webhook-namespace name: server labels: server: \"true\" spec: selector: matchLabels: server: \"true\" template: metadata: name: server labels: server: \"true\" spec: serviceAccountName: server containers: - name: my-webhook-container 1 image: <image_registry_username>/<image_path>:<tag> 2 imagePullPolicy: IfNotPresent command: - <container_commands> 3 ports: - containerPort: 8443 4 volumeMounts: - mountPath: /var/serving-cert name: serving-cert readinessProbe: httpGet: path: /healthz port: 8443 5 scheme: 
HTTPS volumes: - name: serving-cert secret: defaultMode: 420 secretName: server-serving-cert", "oc apply -f webhook-daemonset.yaml", "apiVersion: v1 kind: Secret metadata: namespace: my-webhook-namespace name: server-serving-cert type: kubernetes.io/tls data: tls.crt: <server_certificate> 1 tls.key: <server_key> 2", "oc apply -f webhook-secret.yaml", "apiVersion: v1 kind: List items: - apiVersion: v1 kind: ServiceAccount metadata: namespace: my-webhook-namespace name: server - apiVersion: v1 kind: Service metadata: namespace: my-webhook-namespace name: server annotations: service.beta.openshift.io/serving-cert-secret-name: server-serving-cert spec: selector: server: \"true\" ports: - port: 443 1 targetPort: 8443 2", "oc apply -f webhook-service.yaml", "apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: namespacereservations.online.openshift.io 1 spec: group: online.openshift.io 2 version: v1alpha1 3 scope: Cluster 4 names: plural: namespacereservations 5 singular: namespacereservation 6 kind: NamespaceReservation 7", "oc apply -f webhook-crd.yaml", "apiVersion: apiregistration.k8s.io/v1beta1 kind: APIService metadata: name: v1beta1.admission.online.openshift.io spec: caBundle: <ca_signing_certificate> 1 group: admission.online.openshift.io groupPriorityMinimum: 1000 versionPriority: 15 service: name: server namespace: my-webhook-namespace version: v1beta1", "oc apply -f webhook-api-service.yaml", "apiVersion: admissionregistration.k8s.io/v1beta1 kind: ValidatingWebhookConfiguration metadata: name: namespacereservations.admission.online.openshift.io 1 webhooks: - name: namespacereservations.admission.online.openshift.io 2 clientConfig: service: 3 namespace: default name: kubernetes path: /apis/admission.online.openshift.io/v1beta1/namespacereservations 4 caBundle: <ca_signing_certificate> 5 rules: - operations: - CREATE apiGroups: - project.openshift.io apiVersions: - \"*\" resources: - projectrequests - operations: - CREATE apiGroups: - \"\" apiVersions: - \"*\" resources: - namespaces failurePolicy: Fail", "oc apply -f webhook-config.yaml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/architecture/admission-plug-ins
Chapter 57. DeploymentTemplate schema reference
Chapter 57. DeploymentTemplate schema reference Used in: CruiseControlTemplate , EntityOperatorTemplate , JmxTransTemplate , KafkaBridgeTemplate , KafkaConnectTemplate , KafkaExporterTemplate , KafkaMirrorMakerTemplate Full list of DeploymentTemplate schema properties Use deploymentStrategy to specify the strategy used to replace old pods with new ones when deployment configuration changes. Use one of the following values: RollingUpdate : Pods are restarted with zero downtime. Recreate : Pods are terminated before new ones are created. Using the Recreate deployment strategy has the advantage of not requiring spare resources, but the disadvantage is the application downtime. Example showing the deployment strategy set to Recreate . # ... template: deployment: deploymentStrategy: Recreate # ... This configuration change does not cause a rolling update. 57.1. DeploymentTemplate schema properties Property Property type Description metadata MetadataTemplate Metadata applied to the resource. deploymentStrategy string (one of [RollingUpdate, Recreate]) Pod replacement strategy for deployment configuration changes. Valid values are RollingUpdate and Recreate . Defaults to RollingUpdate .
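For orientation, the following hedged sketch shows where this template sits inside a custom resource, using a KafkaConnect cluster as the example; the resource name, replica count, and bootstrap address are illustrative assumptions rather than values taken from this reference:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster                            # illustrative name
spec:
  replicas: 1                                         # illustrative replica count
  bootstrapServers: my-cluster-kafka-bootstrap:9092   # illustrative bootstrap address
  template:
    deployment:
      deploymentStrategy: Recreate                    # pods are terminated before new ones are created

Because Recreate briefly stops all pods, it is usually reserved for environments where the temporary downtime is acceptable.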
[ "template: deployment: deploymentStrategy: Recreate" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-deploymenttemplate-reference
7.119. libsemanage
7.119. libsemanage 7.119.1. RHBA-2013:0465 - libsemanage bug fix update Updated libsemanage packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The libsemanage library provides an API for the manipulation of SELinux binary policies. It is used by checkpolicy (the policy compiler) and similar tools, as well as by programs such as load_policy, which must perform specific transformations on binary policies (for example, customizing policy boolean settings). Bug Fixes BZ#798332 Previously, the "usepasswd" parameter was not available in the /etc/selinux/semanage.conf file. This update adds the missing "usepasswd" parameter to this file. BZ# 829378 When a custom SELinux policy module was loaded with an error, an error message that was not very informative was returned. This update fixes the error message to be more helpful for users. All users of libsemanage are advised to upgrade to these updated packages, which fix these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/libsemanage
probe::tcpmib.AttemptFails
probe::tcpmib.AttemptFails Name probe::tcpmib.AttemptFails - Count a failed attempt to open a socket Synopsis tcpmib.AttemptFails Values op value to be added to the counter (default value of 1) sk pointer to the struct sock being acted on Description The packet pointed to by skb is filtered by the function tcpmib_filter_key . If the packet passes the filter, it is counted in the global AttemptFails (equivalent to SNMP's MIB TCP_MIB_ATTEMPTFAILS)
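For orientation, a minimal SystemTap sketch that uses this probe to total failed connection attempts; the five-second reporting window and the script itself are illustrative and not part of the tapset reference:

stap -e 'global fails
probe tcpmib.AttemptFails { fails += op }
probe timer.s(5) { printf("AttemptFails in the last 5 seconds: %d\n", fails); exit() }'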
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-tcpmib-attemptfails
4.3. Virtual Database Connection Type
4.3. Virtual Database Connection Type Once your VDB is deployed, you can configure a property in it called connection type . By setting this property, you can determine what type of connections can be made to the VDB. You can set it to one of the following: NONE : disallow new connections. BY_VERSION : (the default setting) allow connections only if the version is specified or if this is the earliest BY_VERSION VDB and there are no VDBs marked as ANY. ANY : allow connections with or without a version specified. If you only want to migrate a few of your applications to the new version of the VDB, then set it to BY_VERSION. This ensures that only applications that know of the new version may use it. If only a select few applications are to remain on the current VDB version, then you will need to update their connection settings to reference the current VDB by its version. The newly deployed VDB will then have its connection type set to ANY, which allows all new connections to be made against the newer version. If you need to undertake a rollback in this scenario, then the newly-deployed VDB will, accordingly, have its connection type set to NONE or BY_VERSION.
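As an illustration only, the following AdminShell sketch shows how the migration scenario above might be scripted; the connectAsAdmin and changeVDBConnectionType calls, the connection details, and the VDB name and versions are assumptions for the sake of the example rather than steps taken from this guide:

// Hypothetical AdminShell session; verify the method names against your product's AdminShell documentation.
connectAsAdmin("localhost", "9999", "admin", "admin-password", "conn1")
changeVDBConnectionType("MyVDB", 2, "ANY")         // new version accepts all new connections
changeVDBConnectionType("MyVDB", 1, "BY_VERSION")  // old version reachable only when the version is specified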
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/virtual_database_connection_type
Preface
Preface Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to enable us to address your request quickly. Prerequisite You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one. Procedure Click the following link: Create issue . In the Summary text box, enter a brief description of the issue. In the Description text box, provide the following information: The URL of the page where you found the issue. A detailed description of the issue. You can leave the information in any other fields at their default values. Click Create to submit the Jira issue to the documentation team. Thank you for taking the time to provide feedback.
null
https://docs.redhat.com/en/documentation/red_hat_integration/2023.q4/html/installing_debezium_on_openshift/pr01
Chapter 10. Precaching glance images into nova
Chapter 10. Precaching glance images into nova When you configure OpenStack Compute to use local ephemeral storage, glance images are cached to quicken the deployment of instances. If an image that is necessary for an instance is not already cached, it is downloaded to the local disk of the Compute node when you create the instance. The process of downloading a glance image takes a variable amount of time, depending on the image size and network characteristics such as bandwidth and latency. If you attempt to start an instance, and the image is not available on the Ceph cluster that is local, launching an instance will fail with the following message: You see the following in the Compute service log: The instance fails to start due to a parameter in the nova.conf configuration file called never_download_image_if_on_rbd , which is set to true by default for DCN deployments. You can control this value using the heat parameter NovaDisableImageDownloadToRbd which you can find in the dcn-storage.yaml file. If you set the value of NovaDisableImageDownloadToRbd to false prior to deploying the overcloud, the following occurs: The Compute service (nova) will automatically stream images available at the central location if they are not available locally. You will not be using a COW copy from glance images. The Compute (nova) storage will potentially contain multiple copies of the same image, depending on the number of instances using it. You may saturate both the WAN link to the central location as well as the nova storage pool. Red Hat recommends leaving this value set to true, and ensuring required images are available locally prior to launching an instance. For more information on making images available to the edge, see Section A.1.3, "Copying an image to a new site" . For images that are local, you can speed up the creation of VMs by using the tripleo_nova_image_cache.yml ansible playbook to pre-cache commonly used images or images that are likely to be deployed in the near future. 10.1. Running the tripleo_nova_image_cache.yml ansible playbook Prerequisites Authentication credentials to the correct API in the shell environment. Before running the command provided in each step, you must ensure that the correct authentication file is sourced. Procedure Create an ansible inventory directory for your overcloud stacks: Create a list of image IDs that you want to pre-cache: Retrieve a comprehensive list of available images: Create an ansible playbook argument file called nova_cache_args.yml , and add the IDs of the images that you want to pre-cache: Run the tripleo_nova_image_cache.yml ansible playbook: 10.2. Performance considerations You can specify the number of images that you want to download concurrently with the ansible forks parameter, which defaults to a value of 5 . You can reduce the time to distribute this image by increasing the value of the forks parameter; however, you must balance this with the increase in network and glance-api load. Use the --forks parameter to adjust concurrency as shown: 10.3. Optimizing the image distribution to DCN sites You can reduce WAN traffic by using a proxy for glance image distribution. When you configure a proxy: Glance images are downloaded to a single Compute node that acts as the proxy. The proxy redistributes the glance image to other Compute nodes in the inventory. You can place the following parameters in the nova_cache_args.yml ansible argument file to configure a proxy node.
Set the tripleo_nova_image_cache_use_proxy parameter to true to enable the image cache proxy. The image proxy uses secure copy scp to distribute images to other nodes in the inventory. SCP is inefficient over networks with high latency, such as a WAN between DCN sites. Red Hat recommends that you limit the playbook target to a single DCN location, which correlates to a single stack. Use the tripleo_nova_image_cache_proxy_hostname parameter to select the image cache proxy. The default proxy is the first compute node in the ansible inventory file. Use the tripleo_nova_image_cache_plan parameter to limit the playbook inventory to a single site: 10.4. Configuring the nova-cache cleanup A background process runs periodically to remove images from the nova cache when both of the following conditions are true: The image is not in use by an instance. The age of the image is greater than the value for the nova parameter remove_unused_original_minimum_age_seconds . The default value for the remove_unused_original_minimum_age_seconds parameter is 86400 . The value is expressed in seconds and is equal to 24 hours. You can control this value with the NovaImageCacheTTL tripleo-heat-templates parameter during the initial deployment, or during a stack update of your cloud: When you instruct the playbook to pre-cache an image that already exists on a Compute node, ansible does not report a change, but the age of the image is reset to 0. Run the ansible play more frequently than the value of the NovaImageCacheTTL parameter to maintain a cache of images.
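Pulling these options together, a hedged example run that targets a single DCN site with the proxy enabled; the inventories/dcn0.yaml file name follows the inventory directory created earlier in this chapter, and the forks value is only an illustration:

source centralrc
ansible-playbook -i inventories/dcn0.yaml --forks 10 --extra-vars "@nova_cache_args.yml" /usr/share/ansible/tripleo-playbooks/tripleo_nova_image_cache.yml

In this run, nova_cache_args.yml is expected to contain the tripleo_nova_image_cache_use_proxy, tripleo_nova_image_cache_proxy_hostname, and tripleo_nova_image_cache_plan parameters described above, in addition to the list of image IDs.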
[ "Build of instance 3c04e982-c1d1-4364-b6bd-f876e399325b aborted: Image 20c5ff9d-5f54-4b74-830f-88e78b9999ed is unacceptable: No image locations are accessible", "'Image %s is not on my ceph and [workarounds]/ never_download_image_if_on_rbd=True; refusing to fetch and upload.',", "mkdir inventories find ~/overcloud-deploy/*/config-download -name tripleo-ansible-inventory.yaml | while read f; do cp USDf inventories/USD(basename USD(dirname USDf)).yaml; done", "source centralrc openstack image list +--------------------------------------+---------+--------+ | ID | Name | Status | +--------------------------------------+---------+--------+ | 07bc2424-753b-4f65-9da5-5a99d8383fe6 | image_0 | active | | d5187afa-c821-4f22-aa4b-4e76382bef86 | image_1 | active | +--------------------------------------+---------+--------+", "--- tripleo_nova_image_cache_images: - id: 07bc2424-753b-4f65-9da5-5a99d8383fe6 - id: d5187afa-c821-4f22-aa4b-4e76382bef86", "source centralrc ansible-playbook -i inventories --extra-vars \"@nova_cache_args.yml\" /usr/share/ansible/tripleo-playbooks/tripleo_nova_image_cache.yml", "ansible-playbook -i inventory.yaml --forks 10 --extra-vars \"@nova_cache_args.yml\" /usr/share/ansible/tripleo-playbooks/tripleo_nova_image_cache.yml", "tripleo_nova_image_cache_use_proxy: true tripleo_nova_image_cache_proxy_hostname: dcn0-novacompute-1 tripleo_nova_image_cache_plan: dcn0", "parameter_defaults: NovaImageCacheTTL: 604800 # Default to 7 days for all compute roles Compute2Parameters: NovaImageCacheTTL: 1209600 # Override to 14 days for the Compute2 compute role" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/distributed_compute_node_and_storage_deployment/precaching-glance-images-into-nova
Chapter 5. View OpenShift Data Foundation Topology
Chapter 5. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you interact with these layers. The view also shows how the various elements compose the Storage cluster altogether. Procedure On the OpenShift Web Console, navigate to Storage Data Foundation Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health or indication for alerts. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal's upper left corner to close and return to the view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pods information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_using_ibm_power/viewing-odf-topology_mcg-verify
Chapter 4. FlowSchema [flowcontrol.apiserver.k8s.io/v1]
Chapter 4. FlowSchema [flowcontrol.apiserver.k8s.io/v1] Description FlowSchema defines the schema of a group of flows. Note that a flow is made up of a set of inbound API requests with similar attributes and is identified by a pair of strings: the name of the FlowSchema and a "flow distinguisher". Type object 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object FlowSchemaSpec describes how the FlowSchema's specification looks like. status object FlowSchemaStatus represents the current state of a FlowSchema. 4.1.1. .spec Description FlowSchemaSpec describes how the FlowSchema's specification looks like. Type object Required priorityLevelConfiguration Property Type Description distinguisherMethod object FlowDistinguisherMethod specifies the method of a flow distinguisher. matchingPrecedence integer matchingPrecedence is used to choose among the FlowSchemas that match a given request. The chosen FlowSchema is among those with the numerically lowest (which we take to be logically highest) MatchingPrecedence. Each MatchingPrecedence value must be ranged in [1,10000]. Note that if the precedence is not specified, it will be set to 1000 as default. priorityLevelConfiguration object PriorityLevelConfigurationReference contains information that points to the "request-priority" being used. rules array rules describes which requests will match this flow schema. This FlowSchema matches a request if and only if at least one member of rules matches the request. if it is an empty slice, there will be no requests matching the FlowSchema. rules[] object PolicyRulesWithSubjects prescribes a test that applies to a request to an apiserver. The test considers the subject making the request, the verb being requested, and the resource to be acted upon. This PolicyRulesWithSubjects matches a request if and only if both (a) at least one member of subjects matches the request and (b) at least one member of resourceRules or nonResourceRules matches the request. 4.1.2. .spec.distinguisherMethod Description FlowDistinguisherMethod specifies the method of a flow distinguisher. Type object Required type Property Type Description type string type is the type of flow distinguisher method The supported types are "ByUser" and "ByNamespace". Required. 4.1.3. .spec.priorityLevelConfiguration Description PriorityLevelConfigurationReference contains information that points to the "request-priority" being used. Type object Required name Property Type Description name string name is the name of the priority level configuration being referenced Required. 4.1.4. .spec.rules Description rules describes which requests will match this flow schema. This FlowSchema matches a request if and only if at least one member of rules matches the request. 
if it is an empty slice, there will be no requests matching the FlowSchema. Type array 4.1.5. .spec.rules[] Description PolicyRulesWithSubjects prescribes a test that applies to a request to an apiserver. The test considers the subject making the request, the verb being requested, and the resource to be acted upon. This PolicyRulesWithSubjects matches a request if and only if both (a) at least one member of subjects matches the request and (b) at least one member of resourceRules or nonResourceRules matches the request. Type object Required subjects Property Type Description nonResourceRules array nonResourceRules is a list of NonResourcePolicyRules that identify matching requests according to their verb and the target non-resource URL. nonResourceRules[] object NonResourcePolicyRule is a predicate that matches non-resource requests according to their verb and the target non-resource URL. A NonResourcePolicyRule matches a request if and only if both (a) at least one member of verbs matches the request and (b) at least one member of nonResourceURLs matches the request. resourceRules array resourceRules is a slice of ResourcePolicyRules that identify matching requests according to their verb and the target resource. At least one of resourceRules and nonResourceRules has to be non-empty. resourceRules[] object ResourcePolicyRule is a predicate that matches some resource requests, testing the request's verb and the target resource. A ResourcePolicyRule matches a resource request if and only if: (a) at least one member of verbs matches the request, (b) at least one member of apiGroups matches the request, (c) at least one member of resources matches the request, and (d) either (d1) the request does not specify a namespace (i.e., Namespace=="" ) and clusterScope is true or (d2) the request specifies a namespace and at least one member of namespaces matches the request's namespace. subjects array subjects is the list of normal user, serviceaccount, or group that this rule cares about. There must be at least one member in this slice. A slice that includes both the system:authenticated and system:unauthenticated user groups matches every request. Required. subjects[] object Subject matches the originator of a request, as identified by the request authentication system. There are three ways of matching an originator; by user, group, or service account. 4.1.6. .spec.rules[].nonResourceRules Description nonResourceRules is a list of NonResourcePolicyRules that identify matching requests according to their verb and the target non-resource URL. Type array 4.1.7. .spec.rules[].nonResourceRules[] Description NonResourcePolicyRule is a predicate that matches non-resource requests according to their verb and the target non-resource URL. A NonResourcePolicyRule matches a request if and only if both (a) at least one member of verbs matches the request and (b) at least one member of nonResourceURLs matches the request. Type object Required verbs nonResourceURLs Property Type Description nonResourceURLs array (string) nonResourceURLs is a set of url prefixes that a user should have access to and may not be empty. For example: - "/healthz" is legal - "/hea*" is illegal - "/hea" is legal but matches nothing - "/hea/*" also matches nothing - "/healthz/*" matches all per-component health checks. "*" matches all non-resource urls. If it is present, it must be the only entry. Required. verbs array (string) verbs is a list of matching verbs and may not be empty. "*" matches all verbs. 
If it is present, it must be the only entry. Required. 4.1.8. .spec.rules[].resourceRules Description resourceRules is a slice of ResourcePolicyRules that identify matching requests according to their verb and the target resource. At least one of resourceRules and nonResourceRules has to be non-empty. Type array 4.1.9. .spec.rules[].resourceRules[] Description ResourcePolicyRule is a predicate that matches some resource requests, testing the request's verb and the target resource. A ResourcePolicyRule matches a resource request if and only if: (a) at least one member of verbs matches the request, (b) at least one member of apiGroups matches the request, (c) at least one member of resources matches the request, and (d) either (d1) the request does not specify a namespace (i.e., Namespace=="" ) and clusterScope is true or (d2) the request specifies a namespace and at least one member of namespaces matches the request's namespace. Type object Required verbs apiGroups resources Property Type Description apiGroups array (string) apiGroups is a list of matching API groups and may not be empty. "*" matches all API groups and, if present, must be the only entry. Required. clusterScope boolean clusterScope indicates whether to match requests that do not specify a namespace (which happens either because the resource is not namespaced or the request targets all namespaces). If this field is omitted or false then the namespaces field must contain a non-empty list. namespaces array (string) namespaces is a list of target namespaces that restricts matches. A request that specifies a target namespace matches only if either (a) this list contains that target namespace or (b) this list contains "*". Note that "*" matches any specified namespace but does not match a request that does not specify a namespace (see the clusterScope field for that). This list may be empty, but only if clusterScope is true. resources array (string) resources is a list of matching resources (i.e., lowercase and plural) with, if desired, subresource. For example, [ "services", "nodes/status" ]. This list may not be empty. "*" matches all resources and, if present, must be the only entry. Required. verbs array (string) verbs is a list of matching verbs and may not be empty. "*" matches all verbs and, if present, must be the only entry. Required. 4.1.10. .spec.rules[].subjects Description subjects is the list of normal user, serviceaccount, or group that this rule cares about. There must be at least one member in this slice. A slice that includes both the system:authenticated and system:unauthenticated user groups matches every request. Required. Type array 4.1.11. .spec.rules[].subjects[] Description Subject matches the originator of a request, as identified by the request authentication system. There are three ways of matching an originator; by user, group, or service account. Type object Required kind Property Type Description group object GroupSubject holds detailed information for group-kind subject. kind string kind indicates which one of the other fields is non-empty. Required serviceAccount object ServiceAccountSubject holds detailed information for service-account-kind subject. user object UserSubject holds detailed information for user-kind subject. 4.1.12. .spec.rules[].subjects[].group Description GroupSubject holds detailed information for group-kind subject. Type object Required name Property Type Description name string name is the user group that matches, or "*" to match all user groups. 
See https://github.com/kubernetes/apiserver/blob/master/pkg/authentication/user/user.go for some well-known group names. Required. 4.1.13. .spec.rules[].subjects[].serviceAccount Description ServiceAccountSubject holds detailed information for service-account-kind subject. Type object Required namespace name Property Type Description name string name is the name of matching ServiceAccount objects, or "*" to match regardless of name. Required. namespace string namespace is the namespace of matching ServiceAccount objects. Required. 4.1.14. .spec.rules[].subjects[].user Description UserSubject holds detailed information for user-kind subject. Type object Required name Property Type Description name string name is the username that matches, or "*" to match all usernames. Required. 4.1.15. .status Description FlowSchemaStatus represents the current state of a FlowSchema. Type object Property Type Description conditions array conditions is a list of the current states of FlowSchema. conditions[] object FlowSchemaCondition describes conditions for a FlowSchema. 4.1.16. .status.conditions Description conditions is a list of the current states of FlowSchema. Type array 4.1.17. .status.conditions[] Description FlowSchemaCondition describes conditions for a FlowSchema. Type object Property Type Description lastTransitionTime Time lastTransitionTime is the last time the condition transitioned from one status to another. message string message is a human-readable message indicating details about last transition. reason string reason is a unique, one-word, CamelCase reason for the condition's last transition. status string status is the status of the condition. Can be True, False, Unknown. Required. type string type is the type of the condition. Required. 4.2. API endpoints The following API endpoints are available: /apis/flowcontrol.apiserver.k8s.io/v1/flowschemas DELETE : delete collection of FlowSchema GET : list or watch objects of kind FlowSchema POST : create a FlowSchema /apis/flowcontrol.apiserver.k8s.io/v1/watch/flowschemas GET : watch individual changes to a list of FlowSchema. deprecated: use the 'watch' parameter with a list operation instead. /apis/flowcontrol.apiserver.k8s.io/v1/flowschemas/{name} DELETE : delete a FlowSchema GET : read the specified FlowSchema PATCH : partially update the specified FlowSchema PUT : replace the specified FlowSchema /apis/flowcontrol.apiserver.k8s.io/v1/watch/flowschemas/{name} GET : watch changes to an object of kind FlowSchema. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/flowcontrol.apiserver.k8s.io/v1/flowschemas/{name}/status GET : read status of the specified FlowSchema PATCH : partially update status of the specified FlowSchema PUT : replace status of the specified FlowSchema 4.2.1. /apis/flowcontrol.apiserver.k8s.io/v1/flowschemas HTTP method DELETE Description delete collection of FlowSchema Table 4.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind FlowSchema Table 4.3. 
HTTP responses HTTP code Reponse body 200 - OK FlowSchemaList schema 401 - Unauthorized Empty HTTP method POST Description create a FlowSchema Table 4.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.5. Body parameters Parameter Type Description body FlowSchema schema Table 4.6. HTTP responses HTTP code Reponse body 200 - OK FlowSchema schema 201 - Created FlowSchema schema 202 - Accepted FlowSchema schema 401 - Unauthorized Empty 4.2.2. /apis/flowcontrol.apiserver.k8s.io/v1/watch/flowschemas HTTP method GET Description watch individual changes to a list of FlowSchema. deprecated: use the 'watch' parameter with a list operation instead. Table 4.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.3. /apis/flowcontrol.apiserver.k8s.io/v1/flowschemas/{name} Table 4.8. Global path parameters Parameter Type Description name string name of the FlowSchema HTTP method DELETE Description delete a FlowSchema Table 4.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.10. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified FlowSchema Table 4.11. HTTP responses HTTP code Reponse body 200 - OK FlowSchema schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified FlowSchema Table 4.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. 
This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.13. HTTP responses HTTP code Reponse body 200 - OK FlowSchema schema 201 - Created FlowSchema schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified FlowSchema Table 4.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.15. Body parameters Parameter Type Description body FlowSchema schema Table 4.16. HTTP responses HTTP code Reponse body 200 - OK FlowSchema schema 201 - Created FlowSchema schema 401 - Unauthorized Empty 4.2.4. /apis/flowcontrol.apiserver.k8s.io/v1/watch/flowschemas/{name} Table 4.17. Global path parameters Parameter Type Description name string name of the FlowSchema HTTP method GET Description watch changes to an object of kind FlowSchema. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 4.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.5. /apis/flowcontrol.apiserver.k8s.io/v1/flowschemas/{name}/status Table 4.19. Global path parameters Parameter Type Description name string name of the FlowSchema HTTP method GET Description read status of the specified FlowSchema Table 4.20. HTTP responses HTTP code Reponse body 200 - OK FlowSchema schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified FlowSchema Table 4.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.22. HTTP responses HTTP code Reponse body 200 - OK FlowSchema schema 201 - Created FlowSchema schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified FlowSchema Table 4.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.24. Body parameters Parameter Type Description body FlowSchema schema Table 4.25. HTTP responses HTTP code Reponse body 200 - OK FlowSchema schema 201 - Created FlowSchema schema 401 - Unauthorized Empty
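To tie the schema fields above together, here is a hedged example manifest; the object name, user name, and referenced priority level are illustrative values rather than defaults shipped with the platform:

apiVersion: flowcontrol.apiserver.k8s.io/v1
kind: FlowSchema
metadata:
  name: example-health-checks            # illustrative name
spec:
  matchingPrecedence: 1000
  priorityLevelConfiguration:
    name: global-default                 # assumes this priority level exists on the cluster
  distinguisherMethod:
    type: ByUser
  rules:
    - subjects:
        - kind: User
          user:
            name: example-monitoring-user   # illustrative user
      nonResourceRules:
        - verbs: ["get"]
          nonResourceURLs: ["/healthz", "/readyz"]

The manifest can be created with oc apply -f <file> and then read back or watched through the endpoints listed in Section 4.2.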
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/schedule_and_quota_apis/flowschema-flowcontrol-apiserver-k8s-io-v1
Chapter 1. Introduction to Red Hat Satellite
Chapter 1. Introduction to Red Hat Satellite Red Hat Satellite is a system management solution that enables you to deploy, configure, and maintain your systems across physical, virtual, and cloud environments. Satellite provides provisioning, remote management and monitoring of multiple Red Hat Enterprise Linux deployments with a single, centralized tool. Satellite Server synchronizes the content from Red Hat Customer Portal and other sources, and provides functionality including fine-grained life cycle management, user and group role-based access control, integrated subscription management, as well as advanced GUI, CLI, or API access. Capsule Server mirrors content from Satellite Server to facilitate content federation across various geographical locations. Host systems can pull content and configuration from Capsule Server in their location and not from the central Satellite Server. Capsule Server also provides localized services such as Puppet server, DHCP, DNS, or TFTP. Capsule Servers assist you in scaling your Satellite environment as the number of your managed systems increases. Capsule Servers decrease the load on the central server, increase redundancy, and reduce bandwidth usage. For more information, see Chapter 2, Capsule Server Overview . 1.1. System Architecture The following diagram represents the high-level architecture of Red Hat Satellite. Figure 1.1. Red Hat Satellite System Architecture There are four stages through which content flows in this architecture: External Content Sources The Satellite Server can consume diverse types of content from various sources. The Red Hat Customer Portal is the primary source of software packages, errata, and container images. In addition, you can use other supported content sources (Git repositories, Docker Hub, Puppet Forge, SCAP repositories) as well as your organization's internal data store. Satellite Server The Satellite Server enables you to plan and manage the content life cycle and the configuration of Capsule Servers and hosts through GUI, CLI, or API. Satellite Server organizes the life cycle management by using organizations as principal division units. Organizations isolate content for groups of hosts with specific requirements and administration tasks. For example, the OS build team can use a different organization than the web development team. Satellite Server also contains a fine-grained authentication system to provide Satellite operators with permissions to access precisely the parts of the infrastructure that lie in their area of responsibility. Capsule Servers Capsule Servers mirror content from Satellite Server to establish content sources in various geographical locations. This enables host systems to pull content and configuration from Capsule Servers in their location and not from the central Satellite Server. The recommended minimum number of Capsule Servers is therefore given by the number of geographic regions where the organization that uses Satellite operates. Using Content Views, you can specify the exact subset of content that Capsule Server makes available to hosts. See Figure 1.2, "Content Life Cycle in Red Hat Satellite" for a closer look at life cycle management with the use of Content Views. The communication between managed hosts and Satellite Server is routed through Capsule Server that can also manage multiple services on behalf of hosts. 
Many of these services use dedicated network ports, but Capsule Server ensures that a single source IP address is used for all communications from the host to Satellite Server, which simplifies firewall administration. For more information on Capsule Servers see Chapter 2, Capsule Server Overview . Managed Hosts Hosts are the recipients of content from Capsule Servers. Hosts can be either physical or virtual. Satellite Server can have directly managed hosts. The base system running a Capsule Server is also a managed host of Satellite Server. The following diagram provides a closer look at the distribution of content from Satellite Server to Capsules. Figure 1.2. Content Life Cycle in Red Hat Satellite By default, each organization has a Library of content from external sources. Content Views are subsets of content from the Library created by intelligent filtering. You can publish and promote Content Views into life cycle environments (typically Dev, QA, and Production). When creating a Capsule Server, you can choose which life cycle environments will be copied to that Capsule and made available to managed hosts. Content Views can be combined to create Composite Content Views. It can be beneficial to have a separate Content View for a repository of packages required by an operating system and a separate one for a repository of packages required by an application. One advantage is that any updates to packages in one repository only requires republishing the relevant Content View. You can then use Composite Content Views to combine published Content Views for ease of management. Which Content Views should be promoted to which Capsule Server depends on the Capsule's intended functionality. Any Capsule Server can run DNS, DHCP, and TFTP as infrastructure services that can be supplemented, for example, with content or configuration services. You can update Capsule Server by creating a new version of a Content View using synchronized content from the Library. The new Content View version is then promoted through life cycle environments. You can also create in-place updates of Content Views. This means creating a minor version of the Content View in its current life cycle environment without promoting it from the Library. For example, if you need to apply a security erratum to a Content View used in Production, you can update the Content View directly without promoting to other life cycles. For more information on content management, see Managing Content . 1.2. System Components Red Hat Satellite consists of several open source projects which are integrated, verified, delivered and supported as Satellite. This information is maintained and regularly updated on the Red Hat Customer Portal; see Satellite 6 Component Versions . Red Hat Satellite consists of the following open source projects: Foreman Foreman is an open source application used for provisioning and life cycle management of physical and virtual systems. Foreman automatically configures these systems using various methods, including kickstart and Puppet modules. Foreman also provides historical data for reporting, auditing, and troubleshooting. Katello Katello is a Foreman plug-in for subscription and repository management. It provides a means to subscribe to Red Hat repositories and download content. You can create and manage different versions of this content and apply them to specific systems within user-defined stages of the application life cycle. Candlepin Candlepin is a service within Katello that handles subscription management. 
Pulp Pulp is a service within Katello that handles repository and content management. Pulp ensures efficient storage space by not duplicating RPM packages even when requested by Content Views in different organizations. Hammer Hammer is a CLI tool that provides command line and shell equivalents of most Satellite web UI functions. REST API Red Hat Satellite includes a RESTful API service that allows system administrators and developers to write custom scripts and third-party applications that interface with Red Hat Satellite. The terminology used in Red Hat Satellite and its components is extensive. For explanations of frequently used terms, see Appendix B, Glossary of Terms . 1.3. Supported Usage Each Red Hat Satellite subscription includes one supported instance of Red Hat Enterprise Linux Server. This instance should be reserved solely for the purpose of running Red Hat Satellite. Using the operating system included with Satellite to run other daemons, applications, or services within your environment is not supported. Support for Red Hat Satellite components is described below. SELinux must be either in enforcing or permissive mode, installation with disabled SELinux is not supported. Puppet Red Hat Satellite includes supported Puppet packages. The installation program allows users to install and configure Puppet servers as a part of Capsule Servers. A Puppet module, running on a Puppet server on the Satellite Server or Satellite Capsule Server, is also supported by Red Hat. For information on what versions of Puppet are supported, see the Red Hat Knowledgebase article Satellite 6 Component Versions . Red Hat supports many different scripting and other frameworks, including Puppet modules. Support for these frameworks is based on the Red Hat Knowledgebase article How does Red Hat support scripting frameworks . Pulp Pulp usage is only supported via Satellite web UI, CLI, and API. Direct modification or interaction with Pulp's local API or database is not supported, as this can cause irreparable damage to the Red Hat Satellite databases. Foreman Foreman can be extended using plug-ins, but only plug-ins packaged with Red Hat Satellite are supported. Red Hat does not support plug-ins in the Red Hat Satellite Optional repository. Red Hat Satellite also includes components, configuration and functionality to provision and configure operating systems other than Red Hat Enterprise Linux. While these features are included and can be employed, Red Hat supports their usage for Red Hat Enterprise Linux. Candlepin The only supported methods of using Candlepin are through the Satellite web UI, CLI, and API. Red Hat does not support direct interaction with Candlepin, its local API or database, as this can cause irreparable damage to the Red Hat Satellite databases. Embedded Tomcat Application Server The only supported methods of using the embedded Tomcat application server are through the Satellite web UI, API, and database. Red Hat does not support direct interaction with the embedded Tomcat application server's local API or database. Note Usage of all Red Hat Satellite components is supported within the context of Red Hat Satellite only. Third-party usage of any components falls beyond supported usage. 1.4. Supported Client Architectures 1.4.1. Content Management Supported combinations of major versions of Red Hat Enterprise Linux and hardware architectures for registering and managing hosts with Satellite. This includes the Satellite Client 6 repositories. Table 1.1. 
Content Management Support Platform Architectures Red Hat Enterprise Linux 9 x86_64, ppc64le, s390x, aarch64 Red Hat Enterprise Linux 8 x86_64, ppc64le, s390x Red Hat Enterprise Linux 7 x86_64, ppc64 (BE), ppc64le, aarch64, s390x Red Hat Enterprise Linux 6 x86_64, i386, s390x, ppc64 (BE) 1.4.2. Host Provisioning Supported combinations of major versions of Red Hat Enterprise Linux and hardware architectures for host provisioning with Satellite. Table 1.2. Host Provisioning Support Platform Architectures Red Hat Enterprise Linux 9 x86_64 Red Hat Enterprise Linux 8 x86_64 Red Hat Enterprise Linux 7 x86_64 Red Hat Enterprise Linux 6 x86_64, i386 1.4.3. Configuration Management Supported combinations of major versions of Red Hat Enterprise Linux and hardware architectures for configuration management with Satellite. Table 1.3. Puppet Agent Support Platform Architectures Red Hat Enterprise Linux 9 x86_64 Red Hat Enterprise Linux 8 x86_64, aarch64 Red Hat Enterprise Linux 7 x86_64 Red Hat Enterprise Linux 6 x86_64, i386
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/satellite_overview_concepts_and_deployment_considerations/Introduction_to_Server_planning
Release notes
Release notes Red Hat OpenStack Services on OpenShift 18.0 Release notes for the Red Hat OpenStack Services on OpenShift 18.0 release Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/release_notes/index
Chapter 10. Logging
Chapter 10. Logging 10.1. Configuring logging AMQ JavaScript uses the JavaScript debug module to implement logging. For example, to enable detailed client logging, set the DEBUG environment variable to rhea* : Example: Enabling detailed logging USD export DEBUG=rhea* USD <your-client-program> 10.2. Enabling protocol logging The client can log AMQP protocol frames to the console. This data is often critical when diagnosing problems. To enable protocol logging, set the DEBUG environment variable to rhea:frames : Example: Enabling protocol logging USD export DEBUG=rhea:frames USD <your-client-program>
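The underlying debug module accepts a comma-separated list of selectors, so the settings above can be combined or narrowed. A hedged example, assuming the rhea:events namespace is available in your version of the library alongside rhea:frames:

export DEBUG=rhea:events,rhea:frames
<your-client-program>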
[ "export DEBUG=rhea* <your-client-program>", "export DEBUG=rhea:frames <your-client-program>" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_javascript_client/logging
Chapter 10. Configuring fencing in a Red Hat High Availability cluster
Chapter 10. Configuring fencing in a Red Hat High Availability cluster A node that is unresponsive may still be accessing data. The only way to be certain that your data is safe is to fence the node using STONITH. STONITH is an acronym for "Shoot The Other Node In The Head" and it protects your data from being corrupted by rogue nodes or concurrent access. Using STONITH, you can be certain that a node is truly offline before allowing the data to be accessed from another node. STONITH also has a role to play in the event that a clustered service cannot be stopped. In this case, the cluster uses STONITH to force the whole node offline, thereby making it safe to start the service elsewhere. For more complete general information about fencing and its importance in a Red Hat High Availability cluster, see the Red Hat Knowledgebase solution Fencing in a Red Hat High Availability Cluster . You implement STONITH in a Pacemaker cluster by configuring fence devices for the nodes of the cluster. 10.1. Displaying available fence agents and their options The following commands can be used to view available fencing agents and the available options for specific fencing agents. Note Your system's hardware determines the type of fencing device to use for your cluster. For information about supported platforms and architectures and the different fencing devices, see the Cluster Platforms and Architectures section of the article Support Policies for RHEL High Availability Clusters . Run the following command to list all available fencing agents. When you specify a filter, this command displays only the fencing agents that match the filter. Run the following command to display the options for the specified fencing agent. For example, the following command displays the options for the fence agent for APC over telnet/SSH. Warning For fence agents that provide a method option, with the exception of the fence_sbd agent, a value of cycle is unsupported and should not be specified, as it may cause data corruption. Even for fence_sbd , however, you should not specify a method and instead use the default value. 10.2. Creating a fence device The format for the command to create a fence device is as follows. For a listing of the available fence device creation options, see the pcs stonith -h display. The following command creates a single fencing device for a single node. Some fence devices can fence only a single node, while other devices can fence multiple nodes. The parameters you specify when you create a fencing device depend on what your fencing device supports and requires. Some fence devices can automatically determine what nodes they can fence. You can use the pcmk_host_list parameter when creating a fencing device to specify all of the machines that are controlled by that fencing device. Some fence devices require a mapping of host names to the specifications that the fence device understands. You can map host names with the pcmk_host_map parameter when creating a fencing device. For information about the pcmk_host_list and pcmk_host_map parameters, see General properties of fencing devices . After configuring a fence device, it is imperative that you test the device to ensure that it is working correctly. For information about testing a fence device, see Testing a fence device . 10.3. General properties of fencing devices There are many general properties you can set for fencing devices, as well as various cluster properties that determine fencing behavior. 
Any cluster node can fence any other cluster node with any fence device, regardless of whether the fence resource is started or stopped. Whether the resource is started controls only the recurring monitor for the device, not whether it can be used, with the following exceptions: You can disable a fencing device by running the pcs stonith disable stonith_id command. This will prevent any node from using that device. To prevent a specific node from using a fencing device, you can configure location constraints for the fencing resource with the pcs constraint location ... avoids command. Configuring stonith-enabled=false will disable fencing altogether. Note, however, that Red Hat does not support clusters when fencing is disabled, as it is not suitable for a production environment. The following table describes the general properties you can set for fencing devices. Table 10.1. General Properties of Fencing Devices Field Type Default Description pcmk_host_map string A mapping of host names to port numbers for devices that do not support host names. For example: node1:1;node2:2,3 tells the cluster to use port 1 for node1 and ports 2 and 3 for node2. The pcmk_host_map property supports special characters inside pcmk_host_map values using a backslash in front of the value. For example, you can specify pcmk_host_map="node3:plug\ 1" to include a space in the host alias. pcmk_host_list string A list of machines controlled by this device (Optional unless pcmk_host_check=static-list ). pcmk_host_check string * static-list if either pcmk_host_list or pcmk_host_map is set * Otherwise, dynamic-list if the fence device supports the list action * Otherwise, status if the fence device supports the status action * Otherwise, none . How to determine which machines are controlled by the device. Allowed values: dynamic-list (query the device), static-list (check the pcmk_host_list attribute), none (assume every device can fence every machine) The following table summarizes additional properties you can set for fencing devices. Note that these properties are for advanced use only. Table 10.2. Advanced Properties of Fencing Devices Field Type Default Description pcmk_host_argument string port An alternate parameter to supply instead of port. Some devices do not support the standard port parameter or may provide additional ones. Use this to specify an alternate, device-specific parameter that should indicate the machine to be fenced. A value of none can be used to tell the cluster not to supply any additional parameters. pcmk_reboot_action string reboot An alternate command to run instead of reboot . Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the reboot action. pcmk_reboot_timeout time 60s Specify an alternate timeout to use for reboot actions instead of stonith-timeout . Some devices need much more/less time to complete than normal. Use this to specify an alternate, device-specific, timeout for reboot actions. pcmk_reboot_retries integer 2 The maximum number of times to retry the reboot command within the timeout period. Some devices do not support multiple connections. Operations may fail if the device is busy with another task so Pacemaker will automatically retry the operation, if there is time remaining. Use this option to alter the number of times Pacemaker retries reboot actions before giving up. pcmk_off_action string off An alternate command to run instead of off .
Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the off action. pcmk_off_timeout time 60s Specify an alternate timeout to use for off actions instead of stonith-timeout . Some devices need much more or much less time to complete than normal. Use this to specify an alternate, device-specific, timeout for off actions. pcmk_off_retries integer 2 The maximum number of times to retry the off command within the timeout period. Some devices do not support multiple connections. Operations may fail if the device is busy with another task so Pacemaker will automatically retry the operation, if there is time remaining. Use this option to alter the number of times Pacemaker retries off actions before giving up. pcmk_list_action string list An alternate command to run instead of list . Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the list action. pcmk_list_timeout time 60s Specify an alternate timeout to use for list actions. Some devices need much more or much less time to complete than normal. Use this to specify an alternate, device-specific, timeout for list actions. pcmk_list_retries integer 2 The maximum number of times to retry the list command within the timeout period. Some devices do not support multiple connections. Operations may fail if the device is busy with another task so Pacemaker will automatically retry the operation, if there is time remaining. Use this option to alter the number of times Pacemaker retries list actions before giving up. pcmk_monitor_action string monitor An alternate command to run instead of monitor . Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the monitor action. pcmk_monitor_timeout time 60s Specify an alternate timeout to use for monitor actions instead of stonith-timeout . Some devices need much more or much less time to complete than normal. Use this to specify an alternate, device-specific, timeout for monitor actions. pcmk_monitor_retries integer 2 The maximum number of times to retry the monitor command within the timeout period. Some devices do not support multiple connections. Operations may fail if the device is busy with another task so Pacemaker will automatically retry the operation, if there is time remaining. Use this option to alter the number of times Pacemaker retries monitor actions before giving up. pcmk_status_action string status An alternate command to run instead of status . Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the status action. pcmk_status_timeout time 60s Specify an alternate timeout to use for status actions instead of stonith-timeout . Some devices need much more or much less time to complete than normal. Use this to specify an alternate, device-specific, timeout for status actions. pcmk_status_retries integer 2 The maximum number of times to retry the status command within the timeout period. Some devices do not support multiple connections. Operations may fail if the device is busy with another task so Pacemaker will automatically retry the operation, if there is time remaining. Use this option to alter the number of times Pacemaker retries status actions before giving up. 
pcmk_delay_base string 0s Enables a base delay for fencing actions and specifies a base delay value. You can specify different values for different nodes with the pcmk_delay_base parameter. For general information about fencing delay parameters and their interactions, see Fencing delays . pcmk_delay_max time 0s Enables a random delay for fencing actions and specifies the maximum delay, which is the maximum value of the combined base delay and random delay. For example, if the base delay is 3 and pcmk_delay_max is 10, the random delay will be between 3 and 10. For general information about fencing delay parameters and their interactions, see Fencing delays . pcmk_action_limit integer 1 The maximum number of actions that can be performed in parallel on this device. The cluster property concurrent-fencing=true needs to be configured first (this is the default value). A value of -1 is unlimited. pcmk_on_action string on For advanced use only: An alternate command to run instead of on . Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the on action. pcmk_on_timeout time 60s For advanced use only: Specify an alternate timeout to use for on actions instead of stonith-timeout . Some devices need much more or much less time to complete than normal. Use this to specify an alternate, device-specific, timeout for on actions. pcmk_on_retries integer 2 For advanced use only: The maximum number of times to retry the on command within the timeout period. Some devices do not support multiple connections. Operations may fail if the device is busy with another task so Pacemaker will automatically retry the operation, if there is time remaining. Use this option to alter the number of times Pacemaker retries on actions before giving up. In addition to the properties you can set for individual fence devices, there are also cluster properties you can set that determine fencing behavior, as described in the following table. Table 10.3. Cluster Properties that Determine Fencing Behavior Option Default Description stonith-enabled true Indicates that failed nodes and nodes with resources that cannot be stopped should be fenced. Protecting your data requires that you set this true . If true , or unset, the cluster will refuse to start resources unless one or more STONITH resources have been configured also. Red Hat only supports clusters with this value set to true . stonith-action reboot Action to send to fencing device. Allowed values: reboot , off . The value poweroff is also allowed, but is only used for legacy devices. stonith-timeout 60s How long to wait for a STONITH action to complete. stonith-max-attempts 10 How many times fencing can fail for a target before the cluster will no longer immediately re-attempt it. stonith-watchdog-timeout The maximum time to wait until a node can be assumed to have been killed by the hardware watchdog. It is recommended that this value be set to twice the value of the hardware watchdog timeout. This option is needed only if watchdog-only SBD configuration is used for fencing. concurrent-fencing true Allow fencing operations to be performed in parallel. fence-reaction stop Determines how a cluster node should react if notified of its own fencing. A cluster node may receive notification of its own fencing if fencing is misconfigured, or if fabric fencing is in use that does not cut cluster communication. 
Allowed values are stop to attempt to immediately stop Pacemaker and stay stopped, or panic to attempt to immediately reboot the local node, falling back to stop on failure. Although the default value for this property is stop , the safest choice for this value is panic , which attempts to immediately reboot the local node. If you prefer the stop behavior, as is most likely to be the case in conjunction with fabric fencing, it is recommended that you set this explicitly. priority-fencing-delay 0 (disabled) Sets a fencing delay that allows you to configure a two-node cluster so that in a split-brain situation the node with the fewest or least important resources running is the node that gets fenced. For general information about fencing delay parameters and their interactions, see Fencing delays . For information about setting cluster properties, see Setting and removing cluster properties . 10.4. Fencing delays When cluster communication is lost in a two-node cluster, one node may detect this first and fence the other node. If both nodes detect this at the same time, however, each node may be able to initiate fencing of the other, leaving both nodes powered down or reset. By setting a fencing delay, you can decrease the likelihood of both cluster nodes fencing each other. You can set delays in a cluster with more than two nodes, but this is generally not of any benefit because only a partition with quorum will initiate fencing. You can set different types of fencing delays, depending on your system requirements. static fencing delays A static fencing delay is a fixed, predetermined delay. Setting a static delay on one node makes that node more likely to be fenced because it increases the chances that the other node will initiate fencing first after detecting lost communication. In an active/passive cluster, setting a delay on a passive node makes it more likely that the passive node will be fenced when communication breaks down. You configure a static delay by using the pcmk_delay_base cluster property. You can set this property when a separate fence device is used for each node or when a single fence device is used for all nodes. dynamic fencing delays A dynamic fencing delay is random. It can vary and is determined at the time fencing is needed. You configure a random delay and specify a maximum value for the combined base delay and random delay with the pcmk_delay_max cluster property. When the fencing delay for each node is random, which node is fenced is also random. You may find this feature useful if your cluster is configured with a single fence device for all nodes in an active/active design. priority fencing delays A priority fencing delay is based on active resource priorities. If all resources have the same priority, the node with the fewest resources running is the node that gets fenced. In most cases, you use only one delay-related parameter, but it is possible to combine them. Combining delay-related parameters adds the priority values for the resources together to create a total delay. You configure a priority fencing delay with the priority-fencing-delay cluster property. You may find this feature useful in an active/active cluster design because it can make the node running the fewest resources more likely to be fenced when communication between the nodes is lost. The pcmk_delay_base cluster property Setting the pcmk_delay_base cluster property enables a base delay for fencing and specifies a base delay value.
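As a minimal sketch of setting a base delay, assuming a hypothetical fence device named node2-fence that has already been configured, the delay can be applied with the pcs stonith update command:

# Sketch only: node2-fence is a hypothetical, previously configured fence device.
pcs stonith update node2-fence pcmk_delay_base=5s

The same parameter can also be supplied when the device is first created with pcs stonith create.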
When you set the pcmk_delay_max cluster property in addition to the pcmk_delay_base property, the overall delay is derived from a random delay value added to this static delay so that the sum is kept below the maximum delay. When you set pcmk_delay_base but do not set pcmk_delay_max , there is no random component to the delay and it will be the value of pcmk_delay_base . You can specify different values for different nodes with the pcmk_delay_base parameter. This allows a single fence device to be used in a two-node cluster, with a different delay for each node. You do not need to configure two separate devices to use separate delays. To specify different values for different nodes, you map the host names to the delay value for that node using a similar syntax to pcmk_host_map . For example, node1:0;node2:10s would use no delay when fencing node1 and a 10-second delay when fencing node2 . The pcmk_delay_max cluster property Setting the pcmk_delay_max cluster property enables a random delay for fencing actions and specifies the maximum delay, which is the maximum value of the combined base delay and random delay. For example, if the base delay is 3 and pcmk_delay_max is 10, the random delay will be between 3 and 10. When you set the pcmk_delay_base cluster property in addition to the pcmk_delay_max property, the overall delay is derived from a random delay value added to this static delay so that the sum is kept below the maximum delay. When you set pcmk_delay_max but do not set pcmk_delay_base there is no static component to the delay. The priority-fencing-delay cluster property Setting the priority-fencing-delay cluster property allows you to configure a two-node cluster so that in a split-brain situation the node with the fewest or least important resources running is the node that gets fenced. The priority-fencing-delay property can be set to a time duration. The default value for this property is 0 (disabled). If this property is set to a non-zero value, and the priority meta-attribute is configured for at least one resource, then in a split-brain situation the node with the highest combined priority of all resources running on it will be more likely to remain operational. For example, if you set pcs resource defaults update priority=1 and pcs property set priority-fencing-delay=15s and no other priorities are set, then the node running the most resources will be more likely to remain operational because the other node will wait 15 seconds before initiating fencing. If a particular resource is more important than the rest, you can give it a higher priority. The node running the promoted role of a promotable clone gets an extra 1 point if a priority has been configured for that clone. Interaction of fencing delays Setting more than one type of fencing delay yields the following results: Any delay set with the priority-fencing-delay property is added to any delay from the pcmk_delay_base and pcmk_delay_max fence device properties. This behavior allows some delay when both nodes have equal priority, or both nodes need to be fenced for some reason other than node loss, as when on-fail=fencing is set for a resource monitor operation. When setting these delays in combination, set the priority-fencing-delay property to a value that is significantly greater than the maximum delay from pcmk_delay_base and pcmk_delay_max to be sure the prioritized node is preferred. Setting this property to twice this value is always safe. Only fencing scheduled by Pacemaker itself observes fencing delays. 
Fencing scheduled by external code such as dlm_controld and fencing implemented by the pcs stonith fence command do not provide the necessary information to the fence device. Some individual fence agents implement a delay parameter, with a name determined by the agent, which is independent of delays configured with a pcmk_delay_* property. If both of these delays are configured, they are added together, so they would generally not be used in conjunction. 10.5. Testing a fence device Fencing is a fundamental part of the Red Hat Cluster infrastructure and it is important to validate or test that fencing is working properly. Procedure Use the following procedure to test a fence device. Use ssh, telnet, HTTP, or whatever remote protocol is used to connect to the device to manually log in and test the fence device or see what output is given. For example, if you will be configuring fencing for an IPMI-enabled device, then try to log in remotely with ipmitool . Take note of the options used when logging in manually because those options might be needed when using the fencing agent. If you are unable to log in to the fence device, verify that the device is pingable, there is nothing such as a firewall configuration that is preventing access to the fence device, remote access is enabled on the fencing device, and the credentials are correct. Run the fence agent manually, using the fence agent script. This does not require that the cluster services are running, so you can perform this step before the device is configured in the cluster. This can ensure that the fence device is responding properly before proceeding. Note These examples use the fence_ipmilan fence agent script for an iLO device. The actual fence agent you will use and the command that calls that agent will depend on your server hardware. You should consult the man page for the fence agent you are using to determine which options to specify. You will usually need to know the login and password for the fence device and other information related to the fence device. The following example shows the format you would use to run the fence_ipmilan fence agent script with the -o status parameter to check the status of the fence device interface on another node without actually fencing it. This allows you to test the device and get it working before attempting to reboot the node. When running this command, you specify the name and password of an iLO user that has power on and off permissions for the iLO device. The following example shows the format you would use to run the fence_ipmilan fence agent script with the -o reboot parameter. Running this command on one node reboots the node managed by this iLO device. If the fence agent failed to properly do a status, off, on, or reboot action, you should check the hardware, the configuration of the fence device, and the syntax of your commands. In addition, you can run the fence agent script with the debug output enabled. The debug output is useful for some fencing agents to see where in the sequence of events the fencing agent script is failing when logging into the fence device. When diagnosing a failure that has occurred, you should ensure that the options you specified when manually logging in to the fence device are identical to what you passed on to the fence agent with the fence agent script. For fence agents that support an encrypted connection, you may see an error due to certificate validation failing, requiring that you trust the host or that you use the fence agent's ssl-insecure parameter.
Similarly, if SSL/TLS is disabled on the target device, you may need to account for this when setting the SSL parameters for the fence agent. Note If the fence agent that is being tested is a fence_drac , fence_ilo , or some other fencing agent for a systems management device that continues to fail, then fall back to trying fence_ipmilan . Most systems management cards support IPMI remote login and the only supported fencing agent is fence_ipmilan . Once the fence device has been configured in the cluster with the same options that worked manually and the cluster has been started, test fencing with the pcs stonith fence command from any node (or even multiple times from different nodes), as in the following example. The pcs stonith fence command reads the cluster configuration from the CIB and calls the fence agent as configured to execute the fence action. This verifies that the cluster configuration is correct. If the pcs stonith fence command works properly, that means the fencing configuration for the cluster should work when a fence event occurs. If the command fails, it means that cluster management cannot invoke the fence device through the configuration it has retrieved. Check for the following issues and update your cluster configuration as needed. Check your fence configuration. For example, if you have used a host map, you should ensure that the system can find the node using the host name you have provided. Check whether the password and user name for the device include any special characters that could be misinterpreted by the bash shell. Making sure that you enter passwords and user names surrounded by quotation marks could address this issue. Check whether you can connect to the device using the exact IP address or host name you specified in the pcs stonith command. For example, if you give the host name in the stonith command but test by using the IP address, that is not a valid test. If the protocol that your fence device uses is accessible to you, use that protocol to try to connect to the device. For example, many agents use ssh or telnet. You should try to connect to the device with the credentials you provided when configuring the device, to see if you get a valid prompt and can log in to the device. If you determine that all your parameters are appropriate but you still have trouble connecting to your fence device, you can check the logging on the fence device itself, if the device provides that, which will show if the user has connected and what command the user issued. You can also search through the /var/log/messages file for instances of stonith and error, which could give some idea of what is transpiring, but some agents can provide additional information. Once the fence device tests are working and the cluster is up and running, test an actual failure. To do this, take an action in the cluster that should initiate a token loss. Take down a network. How you take down a network depends on your specific configuration. In many cases, you can physically pull the network or power cables out of the host. For information about simulating a network failure, see the Red Hat Knowledgebase solution What is the proper way to simulate a network failure on a RHEL Cluster? . Note Disabling the network interface on the local host rather than physically disconnecting the network or power cables is not recommended as a test of fencing because it does not accurately simulate a typical real-world failure. Block corosync traffic both inbound and outbound using the local firewall.
The following example blocks corosync, assuming the default corosync port is used, firewalld is used as the local firewall, and the network interface used by corosync is in the default firewall zone: Simulate a crash and panic your machine with sysrq-trigger . Note, however, that triggering a kernel panic can cause data loss; it is recommended that you disable your cluster resources first. 10.6. Configuring fencing levels Pacemaker supports fencing nodes with multiple devices through a feature called fencing topologies. To implement topologies, create the individual devices as you normally would and then define one or more fencing levels in the fencing topology section in the configuration. Pacemaker processes fencing levels as follows: Each level is attempted in ascending numeric order, starting at 1. If a device fails, processing terminates for the current level. No further devices in that level are exercised and the next level is attempted instead. If all devices are successfully fenced, then that level has succeeded and no other levels are tried. The operation is finished when a level has passed (success), or all levels have been attempted (failed). Use the following command to add a fencing level to a node. The devices are given as a comma-separated list of stonith ids, which are attempted for the node at that level. The following command lists all of the fencing levels that are currently configured. In the following example, there are two fence devices configured for node rh7-2 : an ilo fence device called my_ilo and an apc fence device called my_apc . These commands set up fence levels so that if the device my_ilo fails and is unable to fence the node, then Pacemaker will attempt to use the device my_apc . This example also shows the output of the pcs stonith level command after the levels are configured. The following command removes the fence level for the specified node and devices. If no nodes or devices are specified then the fence level you specify is removed from all nodes. The following command clears the fence levels on the specified node or stonith id. If you do not specify a node or stonith id, all fence levels are cleared. If you specify more than one stonith id, they must be separated by a comma and no spaces, as in the following example. The following command verifies that all fence devices and nodes specified in fence levels exist. You can specify nodes in fencing topology by a regular expression applied on a node name and by a node attribute and its value. For example, the following commands configure nodes node1 , node2 , and node3 to use fence devices apc1 and apc2 , and nodes node4 , node5 , and node6 to use fence devices apc3 and apc4 . The following commands yield the same results by using node attribute matching. 10.7. Configuring fencing for redundant power supplies When configuring fencing for redundant power supplies, the cluster must ensure that when attempting to reboot a host, both power supplies are turned off before either power supply is turned back on. If the node never completely loses power, the node may not release its resources. This opens up the possibility of nodes accessing these resources simultaneously and corrupting them. You need to define each device only once and to specify that both are required to fence the node, as in the following example. 10.8. Displaying configured fence devices The following command shows all currently configured fence devices.
If a stonith_id is specified, the command shows the options for that configured fencing device only. If the --full option is specified, all configured fencing options are displayed. 10.9. Exporting fence devices as pcs commands As of Red Hat Enterprise Linux 9.1, you can display the pcs commands that can be used to re-create configured fence devices on a different system using the --output-format=cmd option of the pcs stonith config command. The following commands create a fence_apc_snmp fence device and display the pcs command you can use to re-create the device. 10.10. Modifying and deleting fence devices Modify or add options to a currently configured fencing device with the following command. Updating a SCSI fencing device with the pcs stonith update command causes a restart of all resources running on the same node where the fencing resource was running. You can use either version of the following command to update SCSI devices without causing a restart of other cluster resources. As of RHEL 9.1, SCSI fencing devices can be configured as multipath devices. Use the following command to remove a fencing device from the current configuration. 10.11. Manually fencing a cluster node You can fence a node manually with the following command. If you specify --off this will use the off API call to stonith which will turn the node off instead of rebooting it. In a situation where no fence device is able to fence a node even if it is no longer active, the cluster may not be able to recover the resources on the node. If this occurs, after manually ensuring that the node is powered down you can enter the following command to confirm to the cluster that the node is powered down and free its resources for recovery. Warning If the node you specify is not actually off, but running the cluster software or services normally controlled by the cluster, data corruption/cluster failure will occur. 10.12. Disabling a fence device To disable a fencing device/resource, run the pcs stonith disable command. The following command disables the fence device myapc . 10.13. Preventing a node from using a fencing device To prevent a specific node from using a fencing device, you can configure location constraints for the fencing resource. The following example prevents fence device node1-ipmi from running on node1 . 10.14. Configuring ACPI for use with integrated fence devices If your cluster uses integrated fence devices, you must configure ACPI (Advanced Configuration and Power Interface) to ensure immediate and complete fencing. If a cluster node is configured to be fenced by an integrated fence device, disable ACPI Soft-Off for that node. Disabling ACPI Soft-Off allows an integrated fence device to turn off a node immediately and completely rather than attempting a clean shutdown (for example, shutdown -h now ). Otherwise, if ACPI Soft-Off is enabled, an integrated fence device can take four or more seconds to turn off a node (see the note that follows). In addition, if ACPI Soft-Off is enabled and a node panics or freezes during shutdown, an integrated fence device may not be able to turn off the node. Under those circumstances, fencing is delayed or unsuccessful. Consequently, when a node is fenced with an integrated fence device and ACPI Soft-Off is enabled, a cluster recovers slowly or requires administrative intervention to recover. Note The amount of time required to fence a node depends on the integrated fence device used. 
Some integrated fence devices perform the equivalent of pressing and holding the power button; therefore, the fence device turns off the node in four to five seconds. Other integrated fence devices perform the equivalent of pressing the power button momentarily, relying on the operating system to turn off the node; therefore, the fence device turns off the node in a time span much longer than four to five seconds. The preferred way to disable ACPI Soft-Off is to change the BIOS setting to "instant-off" or an equivalent setting that turns off the node without delay, as described in "Disabling ACPI Soft-Off with the BIOS" below. Disabling ACPI Soft-Off with the BIOS may not be possible with some systems. If disabling ACPI Soft-Off with the BIOS is not satisfactory for your cluster, you can disable ACPI Soft-Off with one of the following alternate methods: Setting HandlePowerKey=ignore in the /etc/systemd/logind.conf file and verifying that the node turns off immediately when fenced, as described in "Disabling ACPI Soft-Off in the logind.conf file", below. This is the first alternate method of disabling ACPI Soft-Off. Appending acpi=off to the kernel boot command line, as described in Disabling ACPI completely in the GRUB file below. This is the second alternate method of disabling ACPI Soft-Off, if the preferred or the first alternate method is not available. Important This method completely disables ACPI; some computers do not boot correctly if ACPI is completely disabled. Use this method only if the other methods are not effective for your cluster. 10.14.1. Disabling ACPI Soft-Off with the BIOS You can disable ACPI Soft-Off by configuring the BIOS of each cluster node with the following procedure. Note The procedure for disabling ACPI Soft-Off with the BIOS may differ among server systems. You should verify this procedure with your hardware documentation. Procedure Reboot the node and start the BIOS CMOS Setup Utility program. Navigate to the Power menu (or equivalent power management menu). At the Power menu, set the Soft-Off by PWR-BTTN function (or equivalent) to Instant-Off (or the equivalent setting that turns off the node by means of the power button without delay). The BIOS CMOS Setup Utility example below shows a Power menu with ACPI Function set to Enabled and Soft-Off by PWR-BTTN set to Instant-Off . Note The equivalents to ACPI Function , Soft-Off by PWR-BTTN , and Instant-Off may vary among computers. However, the objective of this procedure is to configure the BIOS so that the computer is turned off by means of the power button without delay. Exit the BIOS CMOS Setup Utility program, saving the BIOS configuration. Verify that the node turns off immediately when fenced. For information about testing a fence device, see Testing a fence device . BIOS CMOS Setup Utility : This example shows ACPI Function set to Enabled , and Soft-Off by PWR-BTTN set to Instant-Off . 10.14.2. Disabling ACPI Soft-Off in the logind.conf file To disable power-key handling in the /etc/systemd/logind.conf file, use the following procedure. Procedure Define the following configuration in the /etc/systemd/logind.conf file: Restart the systemd-logind service: Verify that the node turns off immediately when fenced. For information about testing a fence device, see Testing a fence device . 10.14.3. Disabling ACPI completely in the GRUB file You can disable ACPI Soft-Off by appending acpi=off to the GRUB menu entry for a kernel.
Important This method completely disables ACPI; some computers do not boot correctly if ACPI is completely disabled. Use this method only if the other methods are not effective for your cluster. Procedure Use the following procedure to disable ACPI in the GRUB file: Use the --args option in combination with the --update-kernel option of the grubby tool to change the grub.cfg file of each cluster node as follows: Reboot the node. Verify that the node turns off immediately when fenced. For information about testing a fence device, see Testing a fence device .
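As a minimal sketch of this procedure, assuming a hypothetical cluster node named node1, the kernel argument is appended with grubby, the node is rebooted, and the change is then verified with a manual fence test issued from another cluster node:

# Sketch only: run the first two commands on node1, the node being reconfigured.
grubby --args=acpi=off --update-kernel=ALL
reboot

# From another cluster node, verify that node1 turns off immediately when fenced.
pcs stonith fence node1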
[ "pcs stonith list [ filter ]", "pcs stonith describe [ stonith_agent ]", "pcs stonith describe fence_apc Stonith options for: fence_apc ipaddr (required): IP Address or Hostname login (required): Login Name passwd: Login password or passphrase passwd_script: Script to retrieve password cmd_prompt: Force command prompt secure: SSH connection port (required): Physical plug number or name of virtual machine identity_file: Identity file for ssh switch: Physical switch number on device inet4_only: Forces agent to use IPv4 addresses only inet6_only: Forces agent to use IPv6 addresses only ipport: TCP port to use for connection with device action (required): Fencing Action verbose: Verbose mode debug: Write debug information to given file version: Display version information and exit help: Display help and exit separator: Separator for CSV created by operation list power_timeout: Test X seconds for status change after ON/OFF shell_timeout: Wait X seconds for cmd prompt after issuing command login_timeout: Wait X seconds for cmd prompt after login power_wait: Wait X seconds after issuing ON/OFF delay: Wait X seconds before fencing is started retry_on: Count of attempts to retry power on", "pcs stonith create stonith_id stonith_device_type [ stonith_device_options ] [op operation_action operation_options ]", "pcs stonith create MyStonith fence_virt pcmk_host_list=f1 op monitor interval=30s", "fence_ipmilan -a ipaddress -l username -p password -o status", "fence_ipmilan -a ipaddress -l username -p password -o reboot", "fence_ipmilan -a ipaddress -l username -p password -o status -D /tmp/USD(hostname)-fence_agent.debug", "pcs stonith fence node_name", "firewall-cmd --direct --add-rule ipv4 filter OUTPUT 2 -p udp --dport=5405 -j DROP firewall-cmd --add-rich-rule='rule family=\"ipv4\" port port=\"5405\" protocol=\"udp\" drop", "echo c > /proc/sysrq-trigger", "pcs stonith level add level node devices", "pcs stonith level", "pcs stonith level add 1 rh7-2 my_ilo pcs stonith level add 2 rh7-2 my_apc pcs stonith level Node: rh7-2 Level 1 - my_ilo Level 2 - my_apc", "pcs stonith level remove level [ node_id ] [ stonith_id ] ... 
[ stonith_id ]", "pcs stonith level clear [ node ]| stonith_id (s)]", "pcs stonith level clear dev_a,dev_b", "pcs stonith level verify", "pcs stonith level add 1 \"regexp%node[1-3]\" apc1,apc2 pcs stonith level add 1 \"regexp%node[4-6]\" apc3,apc4", "pcs node attribute node1 rack=1 pcs node attribute node2 rack=1 pcs node attribute node3 rack=1 pcs node attribute node4 rack=2 pcs node attribute node5 rack=2 pcs node attribute node6 rack=2 pcs stonith level add 1 attrib%rack=1 apc1,apc2 pcs stonith level add 1 attrib%rack=2 apc3,apc4", "pcs stonith create apc1 fence_apc_snmp ipaddr=apc1.example.com login=user passwd='7a4D#1j!pz864' pcmk_host_map=\"node1.example.com:1;node2.example.com:2\" pcs stonith create apc2 fence_apc_snmp ipaddr=apc2.example.com login=user passwd='7a4D#1j!pz864' pcmk_host_map=\"node1.example.com:1;node2.example.com:2\" pcs stonith level add 1 node1.example.com apc1,apc2 pcs stonith level add 1 node2.example.com apc1,apc2", "pcs stonith config [ stonith_id ] [--full]", "pcs stonith create myapc fence_apc_snmp ip=\"zapc.example.com\" pcmk_host_map=\"z1.example.com:1;z2.example.com:2\" username=\"apc\" password=\"apc\" pcs stonith config --output-format=cmd Warning: Only 'text' output format is supported for stonith levels pcs stonith create --no-default-ops --force -- myapc fence_apc_snmp ip=zapc.example.com password=apc 'pcmk_host_map=z1.example.com:1;z2.example.com:2' username=apc op monitor interval=60s id=myapc-monitor-interval-60s", "pcs stonith update stonith_id [ stonith_device_options ]", "pcs stonith update-scsi-devices stonith_id set device-path1 device-path2 pcs stonith update-scsi-devices stonith_id add device-path1 remove device-path2", "pcs stonith delete stonith_id", "pcs stonith fence node [--off]", "pcs stonith confirm node", "pcs stonith disable myapc", "pcs constraint location node1-ipmi avoids node1", "`Soft-Off by PWR-BTTN` set to `Instant-Off`", "+---------------------------------------------|-------------------+ | ACPI Function [Enabled] | Item Help | | ACPI Suspend Type [S1(POS)] |-------------------| | x Run VGABIOS if S3 Resume Auto | Menu Level * | | Suspend Mode [Disabled] | | | HDD Power Down [Disabled] | | | Soft-Off by PWR-BTTN [Instant-Off | | | CPU THRM-Throttling [50.0%] | | | Wake-Up by PCI card [Enabled] | | | Power On by Ring [Enabled] | | | Wake Up On LAN [Enabled] | | | x USB KB Wake-Up From S3 Disabled | | | Resume by Alarm [Disabled] | | | x Date(of Month) Alarm 0 | | | x Time(hh:mm:ss) Alarm 0 : 0 : | | | POWER ON Function [BUTTON ONLY | | | x KB Power ON Password Enter | | | x Hot Key Power ON Ctrl-F1 | | | | | | | | +---------------------------------------------|-------------------+", "HandlePowerKey=ignore", "systemctl restart systemd-logind.service", "grubby --args=acpi=off --update-kernel=ALL" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_high_availability_clusters/assembly_configuring-fencing-configuring-and-managing-high-availability-clusters
29.2. Editing the GRUB Configuration
29.2. Editing the GRUB Configuration The GRUB boot loader uses the configuration file /boot/grub/grub.conf . To configure GRUB to boot from the new files, add a boot stanza to /boot/grub/grub.conf that refers to them. A minimal boot stanza looks like the following listing: You may wish to add options to the end of the kernel line of the boot stanza. These options set preliminary options in Anaconda which the user normally sets interactively. For a list of available installer boot options, refer to Chapter 28, Boot Options . The following options are generally useful for medialess installations: ip= repo= lang= keymap= ksdevice= (if installation requires an interface other than eth0) vnc and vncpassword= for a remote installation When you are finished, change the default option in /boot/grub/grub.conf to point to the new first stanza you added:
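The following is a minimal sketch of a modified /boot/grub/grub.conf after the new stanza has been added and the default option updated to point to it; the repository URL, VNC password, and other option values are hypothetical placeholders:

# Boot the installation stanza by default.
default 0

title Installation
        root (hd0,0)
        kernel /vmlinuz-install repo=http://installserver.example.com/rhel6/os/ lang=en_US keymap=us vnc vncpassword=changeme
        initrd /initrd.img-install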
[ "title Installation root (hd0,0) kernel /vmlinuz-install initrd /initrd.img-install", "default 0" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sn-medialess-editing-grub-conf
8.168. ql2400-firmware
8.168. ql2400-firmware 8.168.1. RHBA-2013:1707 - ql2400-firmware bug fix and enhancement update An updated ql2400-firmware package that fixes several bugs and adds various enhancements is now available for Red Hat Enterprise Linux 6. The ql2400-firmware package provides the firmware required to run the QLogic 2400 Series of mass storage adapters. Note The ql2400-firmware package has been upgraded to upstream version 7.00.01, which provides a number of bug fixes and enhancements over the previous version. (BZ# 996752 ) All users of QLogic 2400 Series Fibre Channel adapters are advised to upgrade to this updated package, which fixes these bugs and adds these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/ql2400-firmware