title | content | commands | url |
---|---|---|---|
Chapter 11. Related information | Chapter 11. Related information You can refer to the following instructional materials: Upgrade your Red Hat Enterprise Linux Infrastructure Red Hat Enterprise Linux technology capabilities and limits Supported in-place upgrade paths for Red Hat Enterprise Linux In-place upgrade Support Policy Considerations in adopting RHEL 8 Customizing your Red Hat Enterprise Linux in-place upgrade Automating your Red Hat Enterprise Linux pre-upgrade report workflow Using configuration management systems to automate parts of the Leapp pre-upgrade and upgrade process on Red Hat Enterprise Linux Upgrading from RHEL 6 to RHEL 7 Upgrading from RHEL 6 to RHEL 8 Converting from a Linux distribution to RHEL using the Convert2RHEL utility Upgrading Hosts from RHEL 7 to RHEL 8 in Red Hat Satellite How to in-place upgrade SAP environments from RHEL 7 to RHEL 8 (Red Hat Knowledgebase) Red Hat Insights Documentation Upgrades-related Knowledgebase articles and solutions (Red Hat Knowledgebase) The best practices and recommendations for performing RHEL Upgrade using Leapp Leapp upgrade FAQ (Frequently Asked Questions) Red Hat Enterprise Linux Upgrade Helper | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/upgrading_from_rhel_7_to_rhel_8/ref_related-information_upgrading-from-rhel-7-to-rhel-8 |
Part II. Technology Previews | Part II. Technology Previews This part provides an overview of Technology Previews introduced or updated in Red Hat Enterprise Linux 7.2. For more information on the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.2_release_notes/technology-previews |
function::nsecs_to_string | function::nsecs_to_string Name function::nsecs_to_string - Human readable string for given nanoseconds Synopsis Arguments nsecs Number of nanoseconds to translate. Description Returns a string representing the number of nanoseconds as a human readable string consisting of "XmY.ZZZZZZZZZs", where X is the number of minutes, Y is the number of seconds, and ZZZZZZZZZ is the number of nanoseconds. | [
"nsecs_to_string:string(nsecs:long)"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-nsecs-to-string |
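As a quick orientation for how this tapset function is typically exercised, here is a minimal SystemTap one-liner. It is an illustrative sketch rather than part of the referenced page: the input value 90000000000 is an arbitrary example (90 seconds expressed in nanoseconds), and it assumes SystemTap is installed and the user is permitted to run stap.

```
# Prints roughly "1m30.000000000s" (1 minute, 30 seconds, zero nanoseconds);
# the exact fractional width depends on the installed tapset version.
stap -e 'probe begin { println(nsecs_to_string(90000000000)); exit() }'
```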
Chapter 8. Red Hat Enterprise Linux CoreOS (RHCOS) | Chapter 8. Red Hat Enterprise Linux CoreOS (RHCOS) 8.1. About RHCOS Red Hat Enterprise Linux CoreOS (RHCOS) represents the generation of single-purpose container operating system technology by providing the quality standards of Red Hat Enterprise Linux (RHEL) with automated, remote upgrade features. RHCOS is supported only as a component of OpenShift Container Platform 4.18 for all OpenShift Container Platform machines. RHCOS is the only supported operating system for OpenShift Container Platform control plane, or master, machines. While RHCOS is the default operating system for all cluster machines, you can create compute machines, which are also known as worker machines, that use RHEL as their operating system. There are two general ways RHCOS is deployed in OpenShift Container Platform 4.18: If you install your cluster on infrastructure that the installation program provisions, RHCOS images are downloaded to the target platform during installation. Suitable Ignition config files, which control the RHCOS configuration, are also downloaded and used to deploy the machines. If you install your cluster on infrastructure that you manage, you must follow the installation documentation to obtain the RHCOS images, generate Ignition config files, and use the Ignition config files to provision your machines. 8.1.1. Key RHCOS features The following list describes key features of the RHCOS operating system: Based on RHEL : The underlying operating system consists primarily of RHEL components. The same quality, security, and control measures that support RHEL also support RHCOS. For example, RHCOS software is in RPM packages, and each RHCOS system starts up with a RHEL kernel and a set of services that are managed by the systemd init system. Controlled immutability : Although it contains RHEL components, RHCOS is designed to be managed more tightly than a default RHEL installation. Management is performed remotely from the OpenShift Container Platform cluster. When you set up your RHCOS machines, you can modify only a few system settings. This controlled immutability allows OpenShift Container Platform to store the latest state of RHCOS systems in the cluster so it is always able to create additional machines and perform updates based on the latest RHCOS configurations. CRI-O container runtime : Although RHCOS contains features for running the OCI- and libcontainer-formatted containers that Docker requires, it incorporates the CRI-O container engine instead of the Docker container engine. By focusing on features needed by Kubernetes platforms, such as OpenShift Container Platform, CRI-O can offer specific compatibility with different Kubernetes versions. CRI-O also offers a smaller footprint and reduced attack surface than is possible with container engines that offer a larger feature set. At the moment, CRI-O is the only engine available within OpenShift Container Platform clusters. CRI-O can use either the crun or runC container runtime to start and manage containers. crun is the default. For information about how to enable runC, see the documentation for creating a ContainerRuntimeConfig CR. Set of container tools : For tasks such as building, copying, and otherwise managing containers, RHCOS replaces the Docker CLI tool with a compatible set of container tools. The podman CLI tool supports many container runtime features, such as running, starting, stopping, listing, and removing containers and container images. 
The skopeo CLI tool can copy, authenticate, and sign images. You can use the crictl CLI tool to work with containers and pods from the CRI-O container engine. While direct use of these tools in RHCOS is discouraged, you can use them for debugging purposes. rpm-ostree upgrades : RHCOS features transactional upgrades using the rpm-ostree system. Updates are delivered by means of container images and are part of the OpenShift Container Platform update process. When deployed, the container image is pulled, extracted, and written to disk, then the bootloader is modified to boot into the new version. The machine will reboot into the update in a rolling manner to ensure cluster capacity is minimally impacted. bootupd firmware and bootloader updater : Package managers and hybrid systems such as rpm-ostree do not update the firmware or the bootloader. With bootupd , RHCOS users have access to a cross-distribution, system-agnostic update tool that manages firmware and boot updates in UEFI and legacy BIOS boot modes that run on modern architectures, such as x86_64, ppc64le, and aarch64. For information about how to install bootupd , see the documentation for Updating the bootloader using bootupd . Updated through the Machine Config Operator : In OpenShift Container Platform, the Machine Config Operator handles operating system upgrades. Instead of upgrading individual packages, as is done with yum upgrades, rpm-ostree delivers upgrades of the OS as an atomic unit. The new OS deployment is staged during upgrades and goes into effect on the reboot. If something goes wrong with the upgrade, a single rollback and reboot returns the system to the previous state. RHCOS upgrades in OpenShift Container Platform are performed during cluster updates. For RHCOS systems, the layout of the rpm-ostree file system has the following characteristics: /usr is where the operating system binaries and libraries are stored and is read-only. We do not support altering this. /etc , /boot , /var are writable on the system but only intended to be altered by the Machine Config Operator. /var/lib/containers is the graph storage location for storing container images. 8.1.2. Choosing how to configure RHCOS RHCOS is designed to deploy on an OpenShift Container Platform cluster with a minimal amount of user configuration. In its most basic form, this consists of: Starting with a provisioned infrastructure, such as on AWS, or provisioning the infrastructure yourself. Supplying a few pieces of information, such as credentials and cluster name, in an install-config.yaml file when running openshift-install . Because RHCOS systems in OpenShift Container Platform are designed to be fully managed from the OpenShift Container Platform cluster after that, directly changing an RHCOS machine is discouraged. Although limited direct access to RHCOS machines in the cluster can be accomplished for debugging purposes, you should not directly configure RHCOS systems. Instead, if you need to add or change features on your OpenShift Container Platform nodes, consider making changes in the following ways: Kubernetes workload objects, such as DaemonSet and Deployment : If you need to add services or other user-level features to your cluster, consider adding them as Kubernetes workload objects. Keeping those features outside of specific node configurations is the best way to reduce the risk of breaking the cluster on subsequent upgrades. 
Day-2 customizations : If possible, bring up a cluster without making any customizations to cluster nodes and make necessary node changes after the cluster is up. Those changes are easier to track later and less likely to break updates. Creating machine configs or modifying Operator custom resources are ways of making these customizations. Day-1 customizations : For customizations that you must implement when the cluster first comes up, there are ways of modifying your cluster so changes are implemented on first boot. Day-1 customizations can be done through Ignition configs and manifest files during openshift-install or by adding boot options during ISO installs provisioned by the user. Here are examples of customizations you could do on day 1: Kernel arguments : If particular kernel features or tuning is needed on nodes when the cluster first boots. Disk encryption : If your security needs require that the root file system on the nodes are encrypted, such as with FIPS support. Kernel modules : If a particular hardware device, such as a network card or video card, does not have a usable module available by default in the Linux kernel. Chronyd : If you want to provide specific clock settings to your nodes, such as the location of time servers. To accomplish these tasks, you can augment the openshift-install process to include additional objects such as MachineConfig objects. Those procedures that result in creating machine configs can be passed to the Machine Config Operator after the cluster is up. Note The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 8.1.3. Choosing how to deploy RHCOS Differences between RHCOS installations for OpenShift Container Platform are based on whether you are deploying on an infrastructure provisioned by the installer or by the user: Installer-provisioned : Some cloud environments offer preconfigured infrastructures that allow you to bring up an OpenShift Container Platform cluster with minimal configuration. For these types of installations, you can supply Ignition configs that place content on each node so it is there when the cluster first boots. User-provisioned : If you are provisioning your own infrastructure, you have more flexibility in how you add content to a RHCOS node. For example, you could add kernel arguments when you boot the RHCOS ISO installer to install each system. However, in most cases where configuration is required on the operating system itself, it is best to provide that configuration through an Ignition config. The Ignition facility runs only when the RHCOS system is first set up. After that, Ignition configs can be supplied later using the machine config. 8.1.4. 
About Ignition Ignition is the utility that is used by RHCOS to manipulate disks during initial configuration. It completes common disk tasks, including partitioning disks, formatting partitions, writing files, and configuring users. On first boot, Ignition reads its configuration from the installation media or the location that you specify and applies the configuration to the machines. Whether you are installing your cluster or adding machines to it, Ignition always performs the initial configuration of the OpenShift Container Platform cluster machines. Most of the actual system setup happens on each machine itself. For each machine, Ignition takes the RHCOS image and boots the RHCOS kernel. Options on the kernel command line identify the type of deployment and the location of the Ignition-enabled initial RAM disk (initramfs). 8.1.4.1. How Ignition works To create machines by using Ignition, you need Ignition config files. The OpenShift Container Platform installation program creates the Ignition config files that you need to deploy your cluster. These files are based on the information that you provide to the installation program directly or through an install-config.yaml file. The way that Ignition configures machines is similar to how tools like cloud-init or Linux Anaconda kickstart configure systems, but with some important differences: Ignition runs from an initial RAM disk that is separate from the system you are installing to. Because of that, Ignition can repartition disks, set up file systems, and perform other changes to the machine's permanent file system. In contrast, cloud-init runs as part of a machine init system when the system boots, so making foundational changes to things like disk partitions cannot be done as easily. With cloud-init, it is also difficult to reconfigure the boot process while you are in the middle of the node boot process. Ignition is meant to initialize systems, not change existing systems. After a machine initializes and the kernel is running from the installed system, the Machine Config Operator from the OpenShift Container Platform cluster completes all future machine configuration. Instead of completing a defined set of actions, Ignition implements a declarative configuration. It checks that all partitions, files, services, and other items are in place before the new machine starts. It then makes the changes, like copying files to disk that are necessary for the new machine to meet the specified configuration. After Ignition finishes configuring a machine, the kernel keeps running but discards the initial RAM disk and pivots to the installed system on disk. All of the new system services and other features start without requiring a system reboot. Because Ignition confirms that all new machines meet the declared configuration, you cannot have a partially configured machine. If a machine setup fails, the initialization process does not finish, and Ignition does not start the new machine. Your cluster will never contain partially configured machines. If Ignition cannot complete, the machine is not added to the cluster. You must add a new machine instead. This behavior prevents the difficult case of debugging a machine when the results of a failed configuration task are not known until something that depended on it fails at a later date. If there is a problem with an Ignition config that causes the setup of a machine to fail, Ignition will not try to use the same config to set up another machine. 
For example, a failure could result from an Ignition config made up of a parent and child config that both want to create the same file. A failure in such a case would prevent that Ignition config from being used again to set up other machines until the problem is resolved. If you have multiple Ignition config files, you get a union of that set of configs. Because Ignition is declarative, conflicts between the configs could cause Ignition to fail to set up the machine. The order of information in those files does not matter. Ignition will sort and implement each setting in ways that make the most sense. For example, if a file needs a directory several levels deep, if another file needs a directory along that path, the later file is created first. Ignition sorts and creates all files, directories, and links by depth. Because Ignition can start with a completely empty hard disk, it can do something cloud-init cannot do: set up systems on bare metal from scratch using features such as PXE boot. In the bare metal case, the Ignition config is injected into the boot partition so that Ignition can find it and configure the system correctly. 8.1.4.2. The Ignition sequence The Ignition process for an RHCOS machine in an OpenShift Container Platform cluster involves the following steps: The machine gets its Ignition config file. Control plane machines get their Ignition config files from the bootstrap machine, and worker machines get Ignition config files from a control plane machine. Ignition creates disk partitions, file systems, directories, and links on the machine. It supports RAID arrays but does not support LVM volumes. Ignition mounts the root of the permanent file system to the /sysroot directory in the initramfs and starts working in that /sysroot directory. Ignition configures all defined file systems and sets them up to mount appropriately at runtime. Ignition runs systemd temporary files to populate required files in the /var directory. Ignition runs the Ignition config files to set up users, systemd unit files, and other configuration files. Ignition unmounts all components in the permanent system that were mounted in the initramfs. Ignition starts up the init process of the new machine, which in turn starts up all other services on the machine that run during system boot. At the end of this process, the machine is ready to join the cluster and does not require a reboot. 8.2. Viewing Ignition configuration files To see the Ignition config file used to deploy the bootstrap machine, run the following command: USD openshift-install create ignition-configs --dir USDHOME/testconfig After you answer a few questions, the bootstrap.ign , master.ign , and worker.ign files appear in the directory you entered. To see the contents of the bootstrap.ign file, pipe it through the jq filter. Here's a snippet from that file: USD cat USDHOME/testconfig/bootstrap.ign | jq { "ignition": { "version": "3.2.0" }, "passwd": { "users": [ { "name": "core", "sshAuthorizedKeys": [ "ssh-rsa AAAAB3NzaC1yc...." 
] } ] }, "storage": { "files": [ { "overwrite": false, "path": "/etc/motd", "user": { "name": "root" }, "append": [ { "source": "data:text/plain;charset=utf-8;base64,VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gdGhlIG1hc3RlciBpcyBmdWxseSB1cC4KClRoZSBwcmltYXJ5IHNlcnZpY2VzIGFyZSByZWxlYXNlLWltYWdlLnNlcnZpY2UgZm9sbG93ZWQgYnkgYm9vdGt1YmUuc2VydmljZS4gVG8gd2F0Y2ggdGhlaXIgc3RhdHVzLCBydW4gZS5nLgoKICBqb3VybmFsY3RsIC1iIC1mIC11IHJlbGVhc2UtaW1hZ2Uuc2VydmljZSAtdSBib290a3ViZS5zZXJ2aWNlCg==" } ], "mode": 420 }, ... To decode the contents of a file listed in the bootstrap.ign file, pipe the base64-encoded data string representing the contents of that file to the base64 -d command. Here's an example using the contents of the /etc/motd file added to the bootstrap machine from the output shown above: USD echo VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gdGhlIG1hc3RlciBpcyBmdWxseSB1cC4KClRoZSBwcmltYXJ5IHNlcnZpY2VzIGFyZSByZWxlYXNlLWltYWdlLnNlcnZpY2UgZm9sbG93ZWQgYnkgYm9vdGt1YmUuc2VydmljZS4gVG8gd2F0Y2ggdGhlaXIgc3RhdHVzLCBydW4gZS5nLgoKICBqb3VybmFsY3RsIC1iIC1mIC11IHJlbGVhc2UtaW1hZ2Uuc2VydmljZSAtdSBib290a3ViZS5zZXJ2aWNlCg== | base64 --decode Example output This is the bootstrap node; it will be destroyed when the master is fully up. The primary services are release-image.service followed by bootkube.service. To watch their status, run e.g. journalctl -b -f -u release-image.service -u bootkube.service Repeat those commands on the master.ign and worker.ign files to see the source of Ignition config files for each of those machine types. You should see a line like the following for the worker.ign , identifying how it gets its Ignition config from the bootstrap machine: "source": "https://api.myign.develcluster.example.com:22623/config/worker", Here are a few things you can learn from the bootstrap.ign file: Format: The format of the file is defined in the Ignition config spec . Files of the same format are used later by the MCO to merge changes into a machine's configuration. Contents: Because the bootstrap machine serves the Ignition configs for other machines, both master and worker machine Ignition config information is stored in the bootstrap.ign , along with the bootstrap machine's configuration. Size: The file is more than 1300 lines long, with path to various types of resources. The content of each file that will be copied to the machine is actually encoded into data URLs, which tends to make the content a bit clumsy to read. (Use the jq and base64 commands shown previously to make the content more readable.) Configuration: The different sections of the Ignition config file are generally meant to contain files that are just dropped into a machine's file system, rather than commands to modify existing files. For example, instead of having a section on NFS that configures that service, you would just add an NFS configuration file, which would then be started by the init process when the system comes up. users: A user named core is created, with your SSH key assigned to that user. This allows you to log in to the cluster with that user name and your credentials. storage: The storage section identifies files that are added to each machine. A few notable files include /root/.docker/config.json (which provides credentials your cluster needs to pull from container image registries) and a bunch of manifest files in /opt/openshift/manifests that are used to configure your cluster. systemd: The systemd section holds content used to create systemd unit files. 
Those files are used to start up services at boot time, as well as manage those services on running systems. Primitives: Ignition also exposes low-level primitives that other tools can build on. 8.3. Changing Ignition configs after installation Machine config pools manage a cluster of nodes and their corresponding machine configs. Machine configs contain configuration information for a cluster. To list all machine config pools that are known: USD oc get machineconfigpools Example output NAME CONFIG UPDATED UPDATING DEGRADED master master-1638c1aea398413bb918e76632f20799 False False False worker worker-2feef4f8288936489a5a832ca8efe953 False False False To list all machine configs: USD oc get machineconfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED OSIMAGEURL 00-master 4.0.0-0.150.0.0-dirty 3.2.0 16m 00-master-ssh 4.0.0-0.150.0.0-dirty 16m 00-worker 4.0.0-0.150.0.0-dirty 3.2.0 16m 00-worker-ssh 4.0.0-0.150.0.0-dirty 16m 01-master-kubelet 4.0.0-0.150.0.0-dirty 3.2.0 16m 01-worker-kubelet 4.0.0-0.150.0.0-dirty 3.2.0 16m master-1638c1aea398413bb918e76632f20799 4.0.0-0.150.0.0-dirty 3.2.0 16m worker-2feef4f8288936489a5a832ca8efe953 4.0.0-0.150.0.0-dirty 3.2.0 16m The Machine Config Operator acts somewhat differently than Ignition when it comes to applying these machine configs. The machine configs are read in order (from 00* to 99*). Labels inside the machine configs identify the type of node each is for (master or worker). If the same file appears in multiple machine config files, the last one wins. So, for example, any file that appears in a 99* file would replace the same file that appeared in a 00* file. The input MachineConfig objects are unioned into a "rendered" MachineConfig object, which will be used as a target by the operator and is the value you can see in the machine config pool. To see what files are being managed from a machine config, look for "Path:" inside a particular MachineConfig object. For example: USD oc describe machineconfigs 01-worker-container-runtime | grep Path: Example output Path: /etc/containers/registries.conf Path: /etc/containers/storage.conf Path: /etc/crio/crio.conf Be sure to give the machine config file a later name (such as 10-worker-container-runtime). Keep in mind that the content of each file is in URL-style data. Then apply the new machine config to the cluster. | [
"openshift-install create ignition-configs --dir USDHOME/testconfig",
"cat USDHOME/testconfig/bootstrap.ign | jq { \"ignition\": { \"version\": \"3.2.0\" }, \"passwd\": { \"users\": [ { \"name\": \"core\", \"sshAuthorizedKeys\": [ \"ssh-rsa AAAAB3NzaC1yc....\" ] } ] }, \"storage\": { \"files\": [ { \"overwrite\": false, \"path\": \"/etc/motd\", \"user\": { \"name\": \"root\" }, \"append\": [ { \"source\": \"data:text/plain;charset=utf-8;base64,VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gdGhlIG1hc3RlciBpcyBmdWxseSB1cC4KClRoZSBwcmltYXJ5IHNlcnZpY2VzIGFyZSByZWxlYXNlLWltYWdlLnNlcnZpY2UgZm9sbG93ZWQgYnkgYm9vdGt1YmUuc2VydmljZS4gVG8gd2F0Y2ggdGhlaXIgc3RhdHVzLCBydW4gZS5nLgoKICBqb3VybmFsY3RsIC1iIC1mIC11IHJlbGVhc2UtaW1hZ2Uuc2VydmljZSAtdSBib290a3ViZS5zZXJ2aWNlCg==\" } ], \"mode\": 420 },",
"echo VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gdGhlIG1hc3RlciBpcyBmdWxseSB1cC4KClRoZSBwcmltYXJ5IHNlcnZpY2VzIGFyZSByZWxlYXNlLWltYWdlLnNlcnZpY2UgZm9sbG93ZWQgYnkgYm9vdGt1YmUuc2VydmljZS4gVG8gd2F0Y2ggdGhlaXIgc3RhdHVzLCBydW4gZS5nLgoKICBqb3VybmFsY3RsIC1iIC1mIC11IHJlbGVhc2UtaW1hZ2Uuc2VydmljZSAtdSBib290a3ViZS5zZXJ2aWNlCg== | base64 --decode",
"This is the bootstrap node; it will be destroyed when the master is fully up. The primary services are release-image.service followed by bootkube.service. To watch their status, run e.g. journalctl -b -f -u release-image.service -u bootkube.service",
"\"source\": \"https://api.myign.develcluster.example.com:22623/config/worker\",",
"USD oc get machineconfigpools",
"NAME CONFIG UPDATED UPDATING DEGRADED master master-1638c1aea398413bb918e76632f20799 False False False worker worker-2feef4f8288936489a5a832ca8efe953 False False False",
"oc get machineconfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED OSIMAGEURL 00-master 4.0.0-0.150.0.0-dirty 3.2.0 16m 00-master-ssh 4.0.0-0.150.0.0-dirty 16m 00-worker 4.0.0-0.150.0.0-dirty 3.2.0 16m 00-worker-ssh 4.0.0-0.150.0.0-dirty 16m 01-master-kubelet 4.0.0-0.150.0.0-dirty 3.2.0 16m 01-worker-kubelet 4.0.0-0.150.0.0-dirty 3.2.0 16m master-1638c1aea398413bb918e76632f20799 4.0.0-0.150.0.0-dirty 3.2.0 16m worker-2feef4f8288936489a5a832ca8efe953 4.0.0-0.150.0.0-dirty 3.2.0 16m",
"oc describe machineconfigs 01-worker-container-runtime | grep Path:",
"Path: /etc/containers/registries.conf Path: /etc/containers/storage.conf Path: /etc/crio/crio.conf"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/architecture/architecture-rhcos |
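The "Changing Ignition configs after installation" text above ends by advising you to give a new machine config a later name, such as 10-worker-container-runtime, and then apply it to the cluster. As a rough sketch of what that looks like in practice, the following hypothetical MachineConfig drops a single file onto worker nodes; the object name, file path, and base64 payload (which decodes to "Welcome") are illustrative assumptions, not values taken from the documentation above.

```
# Hypothetical example: add /etc/motd.d/90-custom to every worker node.
# "mode: 420" is the decimal form of 0644, matching the mode values shown
# in the bootstrap.ign snippet above. Applying this causes the Machine
# Config Operator to roll the change out to the worker pool.
oc apply -f - <<EOF
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 10-worker-custom-motd
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - path: /etc/motd.d/90-custom
          mode: 420
          overwrite: true
          contents:
            source: data:text/plain;charset=utf-8;base64,V2VsY29tZQo=
EOF
```

Because machine configs are read in order from 00* to 99* and unioned into a rendered configuration per pool, a 10-* worker file like this one is merged after the 00-worker defaults, which is why the later name matters.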
Chapter 8. Red Hat Enterprise Linux Atomic Host 7.7.3 | Chapter 8. Red Hat Enterprise Linux Atomic Host 7.7.3 8.1. Atomic Host OStree update : New Tree Version: 7.7.3 (hash: e0ac32316936b7e138a2f9bea407bf20124f34f519e8f7147df3edc69ca86296) Changes since Tree Version 7.7.2 (hash: 1542d075bce595cb38d3f1429388f3c5225732811d2995cb4a35c1be2cde00aa) 8.2. Extras Updated packages : docker-1.13.1-108.git4ef4b30.el7 8.2.1. Container Images Updated : Red Hat Enterprise Linux 7 Init Container Image (rhel7/rhel7-init) Red Hat Enterprise Linux 7.7 Container Image (rhel7.7, rhel7, rhel7/rhel, rhel) Red Hat Enterprise Linux Atomic Image (rhel-atomic, rhel7-atomic, rhel7/rhel-atomic) Red Hat Enterprise Linux Atomic Net-SNMP Container Image (rhel7/net-snmp) Red Hat Enterprise Linux Atomic OpenSCAP Container Image (rhel7/openscap) Red Hat Enterprise Linux Atomic SSSD Container Image (rhel7/sssd) Red Hat Enterprise Linux Atomic Support Tools Container Image (rhel7/support-tools) Red Hat Enterprise Linux Atomic Tools Container Image (rhel7/rhel-tools) Red Hat Enterprise Linux Atomic cockpit-ws Container Image (rhel7/cockpit-ws) Red Hat Enterprise Linux Atomic etcd Container Image (rhel7/etcd) Red Hat Enterprise Linux Atomic flannel Container Image (rhel7/flannel) Red Hat Enterprise Linux Atomic open-vm-tools Container Image (rhel7/open-vm-tools) Red Hat Enterprise Linux Atomic rsyslog Container Image (rhel7/rsyslog) Red Hat Enterprise Linux Atomic sadc Container Image (rhel7/sadc) Red Hat Universal Base Image 7 Container Image (rhel7/ubi7) Red Hat Universal Base Image 7 Init Container Image (rhel7/ubi7-init) Red Hat Universal Base Image 7 Minimal Container Image (rhel7/ubi7-minimal) Red Hat Enterprise Linux Atomic Identity Management Server Container Image (rhel7/ipa-server) 8.3. Announcements Red Hat Enterprise Linux Atomic Host Retires on August 6, 2020 Red Hat Enterprise Linux Atomic Host will be retired on August 6, 2020 and active support will no longer be provided. Accordingly, Red Hat will no longer provide image or ostree updates, Critical or Important security patches, or Urgent Priority bug fixes for Red Hat Enterprise Linux Atomic Host after August 6, 2020. We encourage customers to migrate to the most recent version of Red Hat Enterprise Linux that is supported for their environment. As a benefit of the Red Hat subscription model, customers can use their active subscriptions to entitle any system on any currently supported Red Hat Enterprise Linux release. Customers who wish to deploy containers as part of their production environment are encouraged to evaluate Red Hat OpenShift Container Platform or migrate to a supported version of Red Hat Enterprise Linux. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/release_notes/red_hat_enterprise_linux_atomic_host_7_7_3 |
function::addr_to_node | function::addr_to_node Name function::addr_to_node - Returns which node a given address belongs to within a NUMA system Synopsis Arguments addr the address of the faulting memory access Description This function accepts an address, and returns the node that the given address belongs to in a NUMA system. | [
"addr_to_node:long(addr:long)"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-addr-to-node |
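Like the other tapset functions in this reference, addr_to_node is meant to be called from inside a probe handler. The one-liner below is only a sketch: it borrows the vm.pagefault probe point and its address variable as a convenient source of addresses, and it assumes SystemTap is installed on a NUMA-capable kernel.

```
# Report which NUMA node the first observed faulting address belongs to, then exit.
stap -e 'probe vm.pagefault { printf("addr %p -> node %d\n", address, addr_to_node(address)); exit() }'
```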
Chapter 2. Planning an upgrade | Chapter 2. Planning an upgrade An in-place upgrade is the recommended and supported way to upgrade your SAP HANA system to the next major version of RHEL. You should consider the following before upgrading to RHEL 8: Operating system: SAP HANA is installed with a version that is supported on both the source and target RHEL minor versions. SAP HANA is installed using the default installation path of /hana/shared . Public clouds: The in-place upgrade is supported for on-demand Pay-As-You-Go (PAYG) instances on Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform with Red Hat Update Infrastructure (RHUI) . The in-place upgrade is also supported for Bring Your Own Subscription instances on all public clouds that use Red Hat Subscription Manager (RHSM) for a RHEL subscription. Additional Information: SAP HANA hosts must meet all of the following criteria: Running on x86_64 architecture that is certified by the hardware partner or CCSP for SAP HANA on the source and target OS versions. Running on physical infrastructure or in a virtual environment. Using the Red Hat Enterprise Linux for SAP Solutions subscription. Not using Red Hat HA Solutions for SAP HANA. SAP NetWeaver hosts must meet the following criteria: Using the Red Hat Enterprise Linux for SAP Solutions or Red Hat Enterprise Linux for SAP Applications subscription High Availability: If you are using the High Availability add-on, follow the Recommended Practices for Applying Software Updates to a RHEL High Availability or Resilient Storage Cluster KBA. Please also refer to Chapter 2 and particularly the known limitations mentioned there in the Upgrading from RHEL 7 to RHEL 8 document, as these also apply to the upgrade procedure for SAP HANA and SAP NetWeaver hosts. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/upgrading_sap_environments_from_rhel_7_to_rhel_8/asmb_planning-upgrade_asmb_supported-upgrade-paths |
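The planning chapter above stops at prerequisites, but the in-place upgrade it prepares for is driven by the Leapp utility that the related-information entries earlier in this dataset also reference. The following is a hedged sketch of the basic command flow only; SAP-specific repository, channel, and target-version options are deliberately omitted because they depend on your subscription and landscape.

```
# Run the pre-upgrade assessment and review the report before upgrading.
leapp preupgrade
less /var/log/leapp/leapp-report.txt

# Only once every inhibitor in the report has been resolved:
leapp upgrade
```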
Chapter 4. Configuring OAuth clients | Chapter 4. Configuring OAuth clients Several OAuth clients are created by default in OpenShift Container Platform. You can also register and configure additional OAuth clients. 4.1. Default OAuth clients The following OAuth clients are automatically created when starting the OpenShift Container Platform API: OAuth client Usage openshift-browser-client Requests tokens at <namespace_route>/oauth/token/request with a user-agent that can handle interactive logins. [1] openshift-challenging-client Requests tokens with a user-agent that can handle WWW-Authenticate challenges. <namespace_route> refers to the namespace route. This is found by running the following command: USD oc get route oauth-openshift -n openshift-authentication -o json | jq .spec.host 4.2. Registering an additional OAuth client If you need an additional OAuth client to manage authentication for your OpenShift Container Platform cluster, you can register one. Procedure To register additional OAuth clients: USD oc create -f <(echo ' kind: OAuthClient apiVersion: oauth.openshift.io/v1 metadata: name: demo 1 secret: "..." 2 redirectURIs: - "http://www.example.com/" 3 grantMethod: prompt 4 ') 1 The name of the OAuth client is used as the client_id parameter when making requests to <namespace_route>/oauth/authorize and <namespace_route>/oauth/token . 2 The secret is used as the client_secret parameter when making requests to <namespace_route>/oauth/token . 3 The redirect_uri parameter specified in requests to <namespace_route>/oauth/authorize and <namespace_route>/oauth/token must be equal to or prefixed by one of the URIs listed in the redirectURIs parameter value. 4 The grantMethod is used to determine what action to take when this client requests tokens and has not yet been granted access by the user. Specify auto to automatically approve the grant and retry the request, or prompt to prompt the user to approve or deny the grant. 4.3. Configuring token inactivity timeout for an OAuth client You can configure OAuth clients to expire OAuth tokens after a set period of inactivity. By default, no token inactivity timeout is set. Note If the token inactivity timeout is also configured in the internal OAuth server configuration, the timeout that is set in the OAuth client overrides that value. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have configured an identity provider (IDP). Procedure Update the OAuthClient configuration to set a token inactivity timeout. Edit the OAuthClient object: USD oc edit oauthclient <oauth_client> 1 1 Replace <oauth_client> with the OAuth client to configure, for example, console . Add the accessTokenInactivityTimeoutSeconds field and set your timeout value: apiVersion: oauth.openshift.io/v1 grantMethod: auto kind: OAuthClient metadata: ... accessTokenInactivityTimeoutSeconds: 600 1 1 The minimum allowed timeout value in seconds is 300 . Save the file to apply the changes. Verification Log in to the cluster with an identity from your IDP. Be sure to use the OAuth client that you just configured. Perform an action and verify that it was successful. Wait longer than the configured timeout without using the identity. In this procedure's example, wait longer than 600 seconds. Try to perform an action from the same identity's session. This attempt should fail because the token should have expired due to inactivity longer than the configured timeout. 4.4. Additional resources OAuthClient [oauth.openshift.io/v1 ] | [
"oc get route oauth-openshift -n openshift-authentication -o json | jq .spec.host",
"oc create -f <(echo ' kind: OAuthClient apiVersion: oauth.openshift.io/v1 metadata: name: demo 1 secret: \"...\" 2 redirectURIs: - \"http://www.example.com/\" 3 grantMethod: prompt 4 ')",
"oc edit oauthclient <oauth_client> 1",
"apiVersion: oauth.openshift.io/v1 grantMethod: auto kind: OAuthClient metadata: accessTokenInactivityTimeoutSeconds: 600 1"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/authentication_and_authorization/configuring-oauth-clients |
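After registering or editing OAuth clients as described above, it is easy to confirm what actually exists in the cluster. This is a generic verification sketch; the demo client name is taken from the registration example in the chapter, and cluster-admin access is assumed.

```
# List the default openshift-* OAuth clients plus any you registered.
oc get oauthclients

# Inspect a single client, for example the "demo" client created above,
# to confirm redirectURIs, grantMethod, and accessTokenInactivityTimeoutSeconds.
oc get oauthclient demo -o yaml
```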
16.2. Finding Model Objects | 16.2. Finding Model Objects The Teiid Designer provides a name-based search capability to quickly locate and display model objects. To find a model object: Open the Find Model Object dialog by selecting Search > Teiid Designer > Find Model Object in the main menu. Begin typing a word or partial word in the Type Object Name field. Wild card (*) characters will be honored. As you type, the objects which match the desired name are displayed in the Matching Model Objects list. If more than one object has the same name, the locations or paths of the objects are displayed in the Locations list. If more than one object exists with the desired name, select one of the locations. Click OK . If an editor is not open for the object's model, an editor will open. The desired object is displayed in a diagram (if applicable) and selected. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/finding_model_objects |
Chapter 3. Managing resource servers | Chapter 3. Managing resource servers According to the OAuth2 specification, a resource server is a server hosting the protected resources and capable of accepting and responding to protected resource requests. In Red Hat build of Keycloak, resource servers are provided with a rich platform for enabling fine-grained authorization for their protected resources, where authorization decisions can be made based on different access control mechanisms. Any client application can be configured to support fine-grained permissions. In doing so, you are conceptually turning the client application into a resource server. 3.1. Creating a client application The first step to enable Red Hat build of Keycloak Authorization Services is to create the client application that you want to turn into a resource server. Procedure Click Clients . Clients On this page, click Create . Add Client Type the Client ID of the client. For example, my-resource-server . Type the Root URL for your application. For example: Click Save . The client is created and the client Settings page opens. A page similar to the following is displayed: Client Settings 3.2. Enabling authorization services You can turn your OIDC client into a resource server and enable fine-grained authorization. Procedure In the client settings page, scroll down to the Capability Config section. Toggle Authorization Enabled to On . Click Save . Enabling authorization services A new Authorization tab is displayed for this client. Click the Authorization tab and a page similar to the following is displayed: Resource server settings The Authorization tab contains additional sub-tabs covering the different steps that you must follow to actually protect your application's resources. Each tab is covered separately by a specific topic in this documentation. But here is a quick description about each one: Settings General settings for your resource server. For more details about this page see the Resource Server Settings section. Resource From this page, you can manage your application's resources . Authorization Scopes From this page, you can manage scopes . Policies From this page, you can manage authorization policies and define the conditions that must be met to grant a permission. Permissions From this page, you can manage the permissions for your protected resources and scopes by linking them with the policies you created. Evaluate From this page, you can simulate authorization requests and view the result of the evaluation of the permissions and authorization policies you have defined. Export Settings From this page, you can export the authorization settings to a JSON file. 3.2.1. Resource server settings On the Resource Server Settings page, you can configure the policy enforcement mode, allow remote resource management, and export the authorization configuration settings. Policy Enforcement Mode Specifies how policies are enforced when processing authorization requests sent to the server. Enforcing (default mode) Requests are denied by default even when there is no policy associated with a given resource. Permissive Requests are allowed even when there is no policy associated with a given resource. Disabled Disables the evaluation of all policies and allows access to all resources. Decision Strategy This configuration changes how the policy evaluation engine decides whether or not a resource or scope should be granted based on the outcome from all evaluated permissions. 
Affirmative means that at least one permission must evaluate to a positive decision in order to grant access to a resource and its scopes. Unanimous means that all permissions must evaluate to a positive decision in order for the final decision to be also positive. As an example, if two permissions for the same resource or scope are in conflict (one of them is granting access and the other is denying access), the permission to the resource or scope will be granted if the chosen strategy is Affirmative . Otherwise, a single deny from any permission will also deny access to the resource or scope. Remote Resource Management Specifies whether resources can be managed remotely by the resource server. If false, resources can be managed only from the administration console. 3.3. Default Configuration When you create a resource server, Red Hat build of Keycloak creates a default configuration for your newly created resource server. The default configuration consists of: A default protected resource representing all resources in your application. A policy that always grants access to the resources protected by this policy. A permission that governs access to all resources based on the default policy. The default protected resource is referred to as the default resource and you can view it if you navigate to the Resources tab. Default resource This resource defines a Type , namely urn:my-resource-server:resources:default and a URI /* . Here, the URI field defines a wildcard pattern that indicates to Red Hat build of Keycloak that this resource represents all the paths in your application. In other words, when enabling policy enforcement for your application, all the permissions associated with the resource will be examined before granting access. The Type mentioned previously defines a value that can be used to create typed resource permissions that must be applied to the default resource or any other resource you create using the same type. The default policy is referred to as the only from realm policy and you can view it if you navigate to the Policies tab. Default policy This policy is a JavaScript-based policy defining a condition that always grants access to the resources protected by this policy. If you click this policy you can see that it defines a rule as follows: // by default, grants any permission associated with this policy USDevaluation.grant(); Lastly, the default permission is referred to as the default permission and you can view it if you navigate to the Permissions tab. Default Permission This permission is a resource-based permission , defining a set of one or more policies that are applied to all resources with a given type. 3.3.1. Changing the default configuration You can change the default configuration by removing the default resource, policy, or permission definitions and creating your own. The default resource is created with a URI that maps to any resource or path in your application using a /* pattern. Before creating your own resources, permissions and policies, make sure the default configuration doesn't conflict with your own settings. Note The default configuration defines a resource that maps to all paths in your application. If you are about to write permissions to your own resources, be sure to remove the Default Resource or change its URIs field to more specific paths in your application. Otherwise, the policy associated with the default resource (which by default always grants access) will allow Red Hat build of Keycloak to grant access to any protected resource. 
3.4. Export and import authorization configuration The configuration settings for a resource server (or client) can be exported and downloaded. You can also import an existing configuration file for a resource server. Importing and exporting a configuration file is helpful when you want to create an initial configuration for a resource server or to update an existing configuration. The configuration file contains definitions for: Protected resources and scopes Policies Permissions 3.4.1. Exporting a configuration file Procedure Click Clients in the menu. Click the client you created as a resource server. Click the Export tab. Export Settings The configuration file is exported in JSON format and displayed in a text area, from which you can copy and paste. You can also click Download to download the configuration file and save it. 3.4.2. Importing a configuration file You can import a configuration file for a resource server. Procedure Navigate to the Resource Server Settings page. Import Settings Click Import and choose a file containing the configuration that you want to import. | [
"http://USD{host}:USD{port}/my-resource-server",
"// by default, grants any permission associated with this policy USDevaluation.grant();"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/authorization_services_guide/resource_server_overview |
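The chapter above walks through the admin console; the same client creation and authorization enablement can also be scripted. The sketch below uses the Admin CLI (kcadm.sh) shipped with Red Hat build of Keycloak, and everything in it is an assumption rather than part of the chapter: the installation path, server URL, admin credentials, and the myrealm realm name are placeholders to replace with your own values.

```
# Authenticate the CLI once against the master realm.
/opt/keycloak/bin/kcadm.sh config credentials --server http://localhost:8080 \
  --realm master --user admin --password admin

# Create a confidential client with Authorization Services enabled,
# mirroring the "my-resource-server" example from the chapter.
# The rootUrl placeholders ${host} and ${port} are kept literal here.
/opt/keycloak/bin/kcadm.sh create clients -r myrealm \
  -s clientId=my-resource-server \
  -s 'rootUrl=http://${host}:${port}/my-resource-server' \
  -s publicClient=false \
  -s serviceAccountsEnabled=true \
  -s authorizationServicesEnabled=true
```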
Chapter 1. Running Builds | Chapter 1. Running Builds After installing Builds, you can create a buildah or source-to-image build for use. You can also delete custom resources that are not required for a build. 1.1. Creating a buildah build You can create a buildah build and push the created image to the target registry. Prerequisites You have installed the Builds for Red Hat OpenShift Operator on the OpenShift Container Platform cluster. You have installed the oc CLI. Optional: You have installed the shp CLI . Procedure Create a Build resource and apply it to the OpenShift Container Platform cluster by using one of the CLIs: Example: Using oc CLI USD oc apply -f - <<EOF apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: buildah-golang-build spec: source: 1 type: Git git: url: https://github.com/shipwright-io/sample-go contextDir: docker-build strategy: 2 name: buildah kind: ClusterBuildStrategy paramValues: 3 - name: dockerfile value: Dockerfile output: 4 image: image-registry.openshift-image-registry.svc:5000/buildah-example/sample-go-app EOF 1 The location where the source code is placed. 2 The build strategy that you use to build the container. 3 The parameter defined in the build strategy. To set the value of the dockerfile strategy parameter, specify the Dockerfile location required to build the output image. 4 The location where the built image is pushed. In this procedural example, the built image is pushed to the OpenShift Container Platform cluster internal registry. buildah-example is the name of the current project. Ensure that the specified project exists to allow the image push. Example: Using shp CLI USD shp build create buildah-golang-build \ --source-url="https://github.com/redhat-openshift-builds/samples" --source-context-dir="buildah-build" \ 1 --strategy-name="buildah" \ 2 --dockerfile="Dockerfile" \ 3 --output-image="image-registry.openshift-image-registry.svc:5000/buildah-example/go-app" 4 1 The location where the source code is placed. 2 The build strategy that you use to build the container. 3 The parameter defined in the build strategy. To set the value of the dockerfile strategy parameter, specify the Dockerfile location required to build the output image. 4 The location where the built image is pushed. In this procedural example, the built image is pushed to the OpenShift Container Platform cluster internal registry. buildah-example is the name of the current project. Ensure that the specified project exists to allow the image push. Check if the Build resource is created by using one of the CLIs: Example: Using oc CLI USD oc get builds.shipwright.io buildah-golang-build Example: Using shp CLI USD shp build list Create a BuildRun resource and apply it to the OpenShift Container Platform cluster by using one of the CLIs: Example: Using oc CLI USD oc apply -f - <<EOF apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buildah-golang-buildrun spec: build: name: buildah-golang-build 1 EOF 1 The spec.build.name field denotes the respective build to run, which is expected to be available in the same namespace. Example: Using shp CLI USD shp build run buildah-golang-build --follow 1 1 Optional: By using the --follow flag, you can view the build logs in the output result. 
Check if the BuildRun resource is created by running one of the following commands: Example: Using oc CLI USD oc get buildrun buildah-golang-buildrun Example: Using shp CLI USD shp buildrun list The BuildRun resource creates a TaskRun resource, which then creates the pods to execute build strategy steps. Verification After all the containers complete their tasks, verify the following: Check whether the pod shows the STATUS field as Completed : USD oc get pods -w Example output NAME READY STATUS RESTARTS AGE buildah-golang-buildrun-dtrg2-pod 2/2 Running 0 4s buildah-golang-buildrun-dtrg2-pod 1/2 NotReady 0 7s buildah-golang-buildrun-dtrg2-pod 0/2 Completed 0 55s Check whether the respective TaskRun resource shows the SUCCEEDED field as True : USD oc get tr Example output NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME buildah-golang-buildrun-dtrg2 True Succeeded 11m 8m51s Check whether the respective BuildRun resource shows the SUCCEEDED field as True : USD oc get br Example output NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME buildah-golang-buildrun True Succeeded 13m 11m During verification, if a build run fails, you can check the status.failureDetails field in your BuildRun resource to identify the exact point where the failure happened in the pod or container. Note The pod might switch to a NotReady state because one of the containers has completed its task. This is an expected behavior. Validate whether the image has been pushed to the registry that is specified in the build.spec.output.image field. You can try to pull the image by running the following command from a node that can access the internal registry: USD podman pull image-registry.openshift-image-registry.svc:5000/<project>/<image> 1 1 The project name and image name used when creating the Build resource. For example, you can use buildah-example as the project name and sample-go-app as the image name. 1.1.1. Creating buildah build in a network-restricted environment You can create a buildah build in a network-restricted environment by mirroring the images required by the buildah build strategy. Prerequisites Your cluster can connect and interact with the git source that you can use to create the buildah build. Procedure Run the following command to mirror the images required by the buildah build strategy: USD oc image mirror --insecure -a <registry_authentication> registry.redhat.io/ubi8/buildah@sha256:1c89cc3cab0ac0fc7387c1fe5e63443468219aab6fd531c8dad6d22fd999819e <mirror_registry>/<repo>/ubi8_buildah Perform the steps mentioned in the "Creating a buildah build" section. 1.2. Creating a source-to-image build You can create a source-to-image build and push the created image to a custom Quay repository. Prerequisites You have installed the Builds for Red Hat OpenShift Operator on the OpenShift Container Platform cluster. You have installed the oc CLI. Optional: You have installed the shp CLI . 
Procedure Create a Build resource and apply it to the OpenShift Container Platform cluster by using one of the CLIs: Example: Using oc CLI USD oc apply -f - <<EOF apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: s2i-nodejs-build spec: source: 1 type: Git type: Git git: url: https://github.com/redhat-openshift-builds/samples contextDir: s2i-build/nodejs strategy: 2 name: source-to-image kind: ClusterBuildStrategy paramValues: 3 - name: builder-image value: quay.io/centos7/nodejs-12-centos7:master output: image: quay.io/<repo>/s2i-nodejs-example 4 pushSecret: registry-credential 5 EOF 1 The location where the source code is placed. 2 The build strategy that you use to build the container. 3 The parameter defined in the build strategy. To set the value of the builder-image strategy parameter, specify the builder image location required to build the output image. 4 The location where the built image is pushed. You can push the built image to a custom Quay.io repository. Replace repo with a valid Quay.io organization or your Quay user name. 5 The secret name that stores the credentials for pushing container images. To generate a secret of the type docker-registry for authentication, see "Authentication to container registries". Example: Using shp CLI USD shp build create s2i-nodejs-build \ --source-url="https://github.com/redhat-openshift-builds/samples" --source-context-dir="s2i-build/nodejs" \ 1 --strategy-name="source-to-image" \ 2 --builder-image="quay.io/centos7/nodejs-12-centos7" \ 3 --output-image="quay.io/<repo>/s2i-nodejs-example" \ 4 --output-credentials-secret="registry-credential" 5 1 The location where the source code is placed. 2 The build strategy that you use to build the container. 3 The parameter defined in the build strategy. To set the value of the builder-image strategy parameter, specify the builder image location required to build the output image. 4 The location where the built image is pushed. You can push the built image to a custom Quay.io repository. Replace repo with a valid Quay.io organization or your Quay user name. 5 The secret name that stores the credentials for pushing container images. To generate a secret of the type docker-registry for authentication, see "Authentication to container registries". Check if the Build resource is created by using one of the CLIs: Example: Using oc CLI USD oc get builds.shipwright.io s2i-nodejs-build Example: Using shp CLI USD shp build list Create a BuildRun resource and apply it to the OpenShift Container Platform cluster by using one of the CLIs: Example: Using oc CLI USD oc apply -f - <<EOF apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: s2i-nodejs-buildrun spec: build: name: s2i-nodejs-build 1 EOF 1 The spec.build.name field denotes the respective build to run, which is expected to be available in the same namespace. Example: Using shp CLI USD shp build run s2i-nodejs-build --follow 1 1 Optional: By using the --follow flag, you can view the build logs in the output result. Check if the BuildRun resource is created by running one of the following commands: Example: Using oc CLI USD oc get buildrun s2i-nodejs-buildrun Example: Using shp CLI USD shp buildrun list The BuildRun resource creates a TaskRun resource, which then creates the pods to execute build strategy steps. 
Verification After all the containers complete their tasks, verify the following: Check whether the pod shows the STATUS field as Completed : USD oc get pods -w Example output NAME READY STATUS RESTARTS AGE s2i-nodejs-buildrun-phxxm-pod 2/2 Running 0 10s s2i-nodejs-buildrun-phxxm-pod 1/2 NotReady 0 14s s2i-nodejs-buildrun-phxxm-pod 0/2 Completed 0 2m Check whether the respective TaskRun resource shows the SUCCEEDED field as True : USD oc get tr Example output NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME s2i-nodejs-buildrun-phxxm True Succeeded 2m39s 13s Check whether the respective BuildRun resource shows the SUCCEEDED field as True : USD oc get br Example output NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME s2i-nodejs-buildrun True Succeeded 2m41s 15s During verification, if a build run fails, you can check the status.failureDetails field in your BuildRun resource to identify the exact point where the failure happened in the pod or container. Note The pod might switch to a NotReady state because one of the containers has completed its task. This is an expected behavior. Validate whether the image has been pushed to the registry that is specified in the build.spec.output.image field. You can try to pull the image by running the following command after logging in to the registry: USD podman pull quay.io/<repo>/<image> 1 1 The repository name and image name used when creating the Build resource. For example, you can use s2i-nodejs-example as the image name. Additional resources Authentication to container registries 1.2.1. Creating source-to-image build in a network-restricted environment You can create a source-to-image build in a network-restricted environment by mirroring the images required by the source-to-image build strategy. Prerequisites Your cluster can connect and interact with the git source that you can use to create the source-to-image build. You have the builder-image required to create the source-to-image build in your local registry. If you do not have the builder-image in the local registry, mirror the source image. Procedure Run the following command to mirror the images required by the source-to-image build strategy: USD oc image mirror --insecure -a <registry_authentication> registry.redhat.io/source-to-image/source-to-image-rhel8@sha256:d041c1bbe503d152d0759598f79802e257816d674b342670ef61c6f9e6d401c5 <mirror_registry>/<repo>/source-to-image-source-to-image-rhel8 Perform the steps mentioned in the "Creating a source-to-image build" section. 1.3. Viewing logs You can view the logs of a build run to identify any runtime errors and to resolve them. Prerequisites You have installed the oc CLI. Optional: You have installed the shp CLI. Procedure View logs of a build run by using one of the CLIs: Using oc CLI USD oc logs <buildrun_resource_name> Using shp CLI USD shp buildrun logs <buildrun_resource_name> 1.4. Deleting a resource You can delete a Build , BuildRun , or BuildStrategy resource if it is not required in your project. Prerequisites You have installed the oc CLI. Optional: You have installed the shp CLI. 
Procedure Delete a Build resource by using one of the CLIs: Using oc CLI USD oc delete builds.shipwright.io <build_resource_name> Using shp CLI USD shp build delete <build_resource_name> Delete a BuildRun resource by using one of the CLIs: Using oc CLI USD oc delete buildrun <buildrun_resource_name> Using shp CLI USD shp buildrun delete <buildrun_resource_name> Delete a BuildStrategy resource by running the following command: Using oc CLI USD oc delete buildstrategies <buildstrategy_resource_name> 1.5. Additional resources Authentication to container registries Creating a ShipwrightBuild resource by using the web console Mirroring images for a disconnected installation by using the oc adm command | [
"oc apply -f - <<EOF apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: buildah-golang-build spec: source: 1 type: Git git: url: https://github.com/shipwright-io/sample-go contextDir: docker-build strategy: 2 name: buildah kind: ClusterBuildStrategy paramValues: 3 - name: dockerfile value: Dockerfile output: 4 image: image-registry.openshift-image-registry.svc:5000/buildah-example/sample-go-app EOF",
"shp build create buildah-golang-build --source-url=\"https://github.com/redhat-openshift-builds/samples\" --source-context-dir=\"buildah-build\" \\ 1 --strategy-name=\"buildah\" \\ 2 --dockerfile=\"Dockerfile\" \\ 3 --output-image=\"image-registry.openshift-image-registry.svc:5000/buildah-example/go-app\" 4",
"oc get builds.shipwright.io buildah-golang-build",
"shp build list",
"oc apply -f - <<EOF apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buildah-golang-buildrun spec: build: name: buildah-golang-build 1 EOF",
"shp build run buildah-golang-build --follow 1",
"oc get buildrun buildah-golang-buildrun",
"shp buildrun list",
"oc get pods -w",
"NAME READY STATUS RESTARTS AGE buildah-golang-buildrun-dtrg2-pod 2/2 Running 0 4s buildah-golang-buildrun-dtrg2-pod 1/2 NotReady 0 7s buildah-golang-buildrun-dtrg2-pod 0/2 Completed 0 55s",
"oc get tr",
"NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME buildah-golang-buildrun-dtrg2 True Succeeded 11m 8m51s",
"oc get br",
"NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME buildah-golang-buildrun True Succeeded 13m 11m",
"podman pull image-registry.openshift-image-registry.svc:5000/<project>/<image> 1",
"oc image mirror --insecure -a <registry_authentication> registry.redhat.io/ubi8/buildah@sha256:1c89cc3cab0ac0fc7387c1fe5e63443468219aab6fd531c8dad6d22fd999819e <mirror_registry>/<repo>/ubi8_buildah",
"oc apply -f - <<EOF apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: s2i-nodejs-build spec: source: 1 type: Git type: Git git: url: https://github.com/redhat-openshift-builds/samples contextDir: s2i-build/nodejs strategy: 2 name: source-to-image kind: ClusterBuildStrategy paramValues: 3 - name: builder-image value: quay.io/centos7/nodejs-12-centos7:master output: image: quay.io/<repo>/s2i-nodejs-example 4 pushSecret: registry-credential 5 EOF",
"shp build create s2i-nodejs-build --source-url=\"https://github.com/redhat-openshift-builds/samples\" --source-context-dir=\"s2i-build/nodejs\" \\ 1 --strategy-name=\"source-to-image\" \\ 2 --builder-image=\"quay.io/centos7/nodejs-12-centos7\" \\ 3 --output-image=\"quay.io/<repo>/s2i-nodejs-example\" \\ 4 --output-credentials-secret=\"registry-credential\" 5",
"oc get builds.shipwright.io s2i-nodejs-build",
"shp build list",
"oc apply -f - <<EOF apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: s2i-nodejs-buildrun spec: build: name: s2i-nodejs-build 1 EOF",
"shp build run s2i-nodejs-build --follow 1",
"oc get buildrun s2i-nodejs-buildrun",
"shp buildrun list",
"oc get pods -w",
"NAME READY STATUS RESTARTS AGE s2i-nodejs-buildrun-phxxm-pod 2/2 Running 0 10s s2i-nodejs-buildrun-phxxm-pod 1/2 NotReady 0 14s s2i-nodejs-buildrun-phxxm-pod 0/2 Completed 0 2m",
"oc get tr",
"NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME s2i-nodejs-buildrun-phxxm True Succeeded 2m39s 13s",
"oc get br",
"NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME s2i-nodejs-buildrun True Succeeded 2m41s 15s",
"podman pull quay.io/<repo>/<image> 1",
"oc image mirror --insecure -a <registry_authentication> registry.redhat.io/source-to-image/source-to-image-rhel8@sha256:d041c1bbe503d152d0759598f79802e257816d674b342670ef61c6f9e6d401c5 <mirror_registry>/<repo>/source-to-image-source-to-image-rhel8",
"oc logs <buildrun_resource_name>",
"shp buildrun logs <buildrun_resource_name>",
"oc delete builds.shipwright.io <build_resource_name>",
"shp build delete <build_resource_name>",
"oc delete buildrun <buildrun_resource_name>",
"shp buildrun delete <buildrun_resource_name>",
"oc delete buildstrategies <buildstartegy_resource_name>"
]
| https://docs.redhat.com/en/documentation/builds_for_red_hat_openshift/1.3/html/work_with_builds/running-builds |
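As a follow-up to the verification guidance above: when a build run fails, the failure location recorded in the BuildRun resource can be read directly from its status. This is a minimal, hedged sketch that assumes the status.failureDetails field layout mentioned in the verification section and reuses the sample BuildRun name:

oc get buildrun s2i-nodejs-buildrun -o jsonpath='{.status.failureDetails}{"\n"}'

If the field is populated, it identifies the pod and container in which the failure occurred.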
8.14. augeas | 8.14. augeas 8.14.1. RHBA-2014:1517 - augeas bug fix and enhancement update Updated augeas packages that fix several bugs and add two enhancements are now available for Red Hat Enterprise Linux 6. Augeas is a utility for editing configuration. Augeas parses configuration files in their native formats and transforms them into a tree. Configuration changes are made by manipulating this tree and saving it back into native configuration files. Augeas also uses "lenses" as basic building blocks for establishing the mapping from files into the Augeas tree and back. Bug Fixes BZ# 1001635 , BZ# 1059383 Previously, the Grub lens did not support "setkey", "lock", and "foreground" directives, which caused virt-v2v process to terminate unexpectedly with an error message. The bug has been fixed, and virt-v2v process no longer fails. BZ# 1093711 , BZ# 1016904 When sudo configuration files, sudoers, contained directives with user aliases or group names using underscore characters, the sudoers lens was unable to parse the configuration file. With this update, underscores have been permitted in group names, and the sudoers lens now parses files successfully. BZ# 1033795 Prior to this update, when shell configuration files contained "export" lines with multiple variables or case statements with two semicolons (;;) on the same line as an expression, Augeas was unable to parse these files. With this update, Augeas handles multiple variables on the same export line and case statements with two semicolons as expected, and the aforementioned files are successfully parsed. BZ# 1043636 Previously, when the sysconfig lens was used to parse a shell configuration file containing a blank comment after another comment, the parsing process failed. The lens has been fixed to parse this combination of comments, and parsing is now finished successfully. BZ# 1062091 When parsing yum configuration files with spaces around key or value separators, Augeas was unable to parse the files. The underlying source code has been fixed, and yum configuration files are now parsed successfully. BZ# 1073072 Prior to this update, no generic lens existed for parsing the INI-style files, and parsing thus failed with an error message. The IniFile module has been fixed to contain generic lenses, and INI-style files are now parsed as intended. BZ# 1075112 When automounter maps contained references to hosts with host names containing hyphens, the automounter lens failed to parse the /etc/auto.export configuration file. A patch has been provided to fix this bug, and /etc/auto.export is now parsed as expected. BZ# 1083016 Prior to this update, the default rsyslog configuration file provided in Red Hat Enterprise Linux failed to parse using Augeas. The rsyslog lens has been fixed to parse the filters and templates used, and /etc/rsyslog.conf is now parsed successfully. BZ# 1100237 When Nagios Remote Plugin Executor (NRPE) configuration files contained the "allow_bash_command_substitution" option, the NRPE lens was unable to parse the files. A patch has been provided to fix this bug, and files with "allow_bash_command_substitution" are now parsed as intended. In addition, this update adds the following Enhancements BZ# 1016899 With this update, lenses have been added to parse configuration files relating to Red Hat JBoss A-MQ, including ActiveMQ configurations, ActiveMQ XML files, Jetty configuration, and JMX access files. 
BZ# 1016900 A new lens has been added to parse the Splunk configuration files, and thus the user can now manage Splunk configuration through the Puppet module. Users of augeas are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/augeas |
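As a quick illustration of the lens fixes described in this advisory, you can check how Augeas parses an affected file once the updated packages are installed. This is a minimal sketch; the file paths are examples and the resulting tree depends on the lens versions in use:

augtool print /files/etc/sudoers
augtool print /files/etc/rsyslog.conf

If a lens still cannot parse a file, the corresponding error details are recorded in the tree and can be listed with augtool print /augeas//error.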
Chapter 2. Concepts for multi-site deployments | Chapter 2. Concepts for multi-site deployments This topic describes a highly available multi-site setup and the behavior to expect. It outlines the requirements of the high availability architecture and describes the benefits and tradeoffs. 2.1. When to use this setup Use this setup to provide Red Hat build of Keycloak deployments that are able to tolerate site failures, reducing the likelihood of downtime. 2.2. Deployment, data storage and caching Two independent Red Hat build of Keycloak deployments running in different sites are connected with a low latency network connection. Users, realms, clients, sessions, and other entities are stored in a database that is replicated synchronously across the two sites. The data is also cached in the Red Hat build of Keycloak Infinispan caches as local caches. When the data is changed in one Red Hat build of Keycloak instance, that data is updated in the database, and an invalidation message is sent to the other site using the work cache. In the following paragraphs and diagrams, references to deploying Data Grid apply to the external Data Grid. 2.3. Causes of data and service loss While this setup aims for high availability, the following situations can still lead to service or data loss: Red Hat build of Keycloak site failure may result in requests failing in the period between the failure and the loadbalancer detecting it, as requests may still be routed to the failed site. Once failures occur in the communication between the sites, manual steps are necessary to re-synchronize a degraded setup. Degraded setups can lead to service or data loss if additional components fail. Monitoring is necessary to detect degraded setups. 2.4. Failures which this setup can survive Failure Recovery RPO 1 RTO 2 Database node If the writer instance fails, the database can promote a reader instance in the same or other site to be the new writer. No data loss Seconds to minutes (depending on the database) Red Hat build of Keycloak node Multiple Red Hat build of Keycloak instances run on each site. If one instance fails some incoming requests might receive an error message or are delayed for some seconds. No data loss Less than 30 seconds Data Grid node Multiple Data Grid instances run in each site. If one instance fails, it takes a few seconds for the other nodes to notice the change. Entities are stored in at least two Data Grid nodes, so a single node failure does not lead to data loss. No data loss Less than 30 seconds Data Grid cluster failure If the Data Grid cluster fails in one of the sites, Red Hat build of Keycloak will not be able to communicate with the external Data Grid on that site, and the Red Hat build of Keycloak service will be unavailable. The loadbalancer will detect the situation as /lb-check returns an error, and will direct all traffic to the other site. The setup is degraded until the Data Grid cluster is restored and the data is re-synchronized. No data loss 3 Seconds to minutes (depending on load balancer setup) Connectivity Data Grid If the connectivity between the two sites is lost, data cannot be sent to the other site. Incoming requests might receive an error message or are delayed for some seconds. The Data Grid will mark the other site offline, and will stop sending data. One of the sites needs to be taken offline in the loadbalancer until the connection is restored and the data is re-synchronized between the two sites. In the blueprints, we show how this can be automated. 
No data loss 3 Seconds to minutes (depending on load balancer setup) Connectivity database If the connectivity between the two sites is lost, the synchronous replication will fail. Some requests might receive an error message or be delayed for a few seconds. Manual operations might be necessary depending on the database. No data loss 3 Seconds to minutes (depending on the database) Site failure If none of the Red Hat build of Keycloak nodes are available, the loadbalancer will detect the outage and redirect the traffic to the other site. Some requests might receive an error message until the loadbalancer detects the failure. No data loss 3 Less than two minutes Table footnotes: 1 Recovery point objective, assuming all parts of the setup were healthy at the time this occurred. 2 Recovery time objective. 3 Manual operations needed to restore the degraded setup. The statement "No data loss" depends on the setup not being degraded from failures, which includes completing any pending manual operations to resynchronize the state between the sites. 2.5. Known limitations Site Failure A successful failover requires a setup not degraded from failures. All manual operations like a re-synchronization after a failure must be complete to prevent data loss. Use monitoring to ensure degradations are detected and handled in a timely manner. Out-of-sync sites The sites can become out of sync when a synchronous Data Grid request fails. This situation is currently difficult to monitor, and it would need a full manual re-sync of Data Grid to recover. Monitoring the number of cache entries in both sites and the Red Hat build of Keycloak log file can show when resynch would become necessary. Manual operations Manual operations that re-synchronize the Data Grid state between the sites will issue a full state transfer which will put a stress on the system. Two sites restriction This setup is tested and supported only with two sites. Each additional site increases overall latency as it is necessary for data to be synchronously written to each site. Furthermore, the probability of network failures, and therefore downtime, also increases. Therefore, we do not support more than two sites as we believe it would lead to a deployment with inferior stability and performance. 2.6. Questions and answers Why synchronous database replication? A synchronously replicated database ensures that data written in one site is always available in the other site after site failures and no data is lost. It also ensures that the request will not return stale data, independent on which site it is served. Why synchronous Data Grid replication? A synchronously replicated Data Grid ensures that cached data in one site are always available on the other site after a site failure and no data is lost. It also ensures that the request will not return stale data, independent on which site it is served. Why is a low-latency network between sites needed? Synchronous replication defers the response to the caller until the data is received at the other site. For synchronous database replication and synchronous Data Grid replication, a low latency is necessary as each request can have potentially multiple interactions between the sites when data is updated which would amplify the latency. Is a synchronous cluster less stable than an asynchronous cluster? 
An asynchronous setup would handle network failures between the sites gracefully, while the synchronous setup would delay requests and throw errors to the caller where the asynchronous setup would have deferred the writes to Data Grid or the database on the other site. However, as the two sites would never be fully up-to-date, this setup could lead to data loss during failures. This would include: Lost changes leading to users being able to log in with an old password because database changes are not replicated to the other site at the point of failure when using an asynchronous database. Invalid caches leading to users being able to log in with an old password because invalidating caches are not propagated at the point of failure to the other site when using an asynchronous Data Grid replication. Therefore, tradeoffs exist between high availability and consistency. The focus of this topic is to prioritize consistency over availability with Red Hat build of Keycloak. 2.7. Next steps Continue reading in the Building blocks multi-site deployments chapter to find blueprints for the different building blocks. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/high_availability_guide/concepts-multi-site- |
Chapter 10. Configuring a proxy for external network access | Chapter 10. Configuring a proxy for external network access If your network configuration restricts outbound traffic through proxies, you can configure proxy settings in Red Hat Advanced Cluster Security for Kubernetes to route traffic through a proxy. When you use a proxy with Red Hat Advanced Cluster Security for Kubernetes: All outgoing HTTP, HTTPS, and other TCP traffic from Central and Scanner goes through the proxy. Traffic between Central and Scanner does not go through the proxy. The proxy configuration does not affect the other Red Hat Advanced Cluster Security for Kubernetes components. When you are not using the offline mode, and a Collector running in a secured cluster needs to download an additional eBPF probe at runtime: The collector attempts to download them by contacting Sensor. The Sensor then forwards this request to Central. Central uses the proxy to locate the module or probe at https://collector-modules.stackrox.io . 10.1. Configuring a proxy on an existing deployment To configure a proxy in an existing deployment, you must export the proxy-config secret as a YAML file, update your proxy configuration in that file, and upload it as a secret. Note If you have configured a global proxy on your OpenShift Container Platform cluster, the Operator Lifecycle Manager (OLM) automatically configures Operators that it manages with the cluster-wide proxy. However, you can also configure installed Operators to override the global proxy or inject a custom certificate authority (CA) certificate. For more information, see Configuring proxy support in Operator Lifecycle Manager . Procedure Save the existing secret as a YAML file: USD oc -n stackrox get secret proxy-config \ -o go-template='{{index .data "config.yaml" | \ base64decode}}{{"\n"}}' > /tmp/proxy-config.yaml Edit the fields you want to modify in the YAML configuration file, as specified in the Configure proxy during installation section. After you save the changes, run the following command to replace the secret: USD oc -n stackrox create secret generic proxy-config \ --from-file=config.yaml=/tmp/proxy-config.yaml -o yaml --dry-run | \ oc label -f - --local -o yaml app.kubernetes.io/name=stackrox | \ oc apply -f - Important You must wait for at least 1 minute, until OpenShift Container Platform propagates your changes to Central and Scanner. If you see any issues with outgoing connections after changing the proxy configuration, you must restart your Central and Scanner pods. 10.2. Configuring a proxy during installation When you are installing Red Hat Advanced Cluster Security for Kubernetes by using the roxctl command-line interface (CLI) or Helm, you can specify your proxy configuration during the installation. When you run the installer by using the roxctl central generate command, the installer generates the secrets and deployment configuration files for your environment. You can configure a proxy by editing the generated configuration secret (YAML) file. Currently, you cannot configure proxies by using the roxctl CLI. The configuration is stored in a Kubernetes secret and it is shared by both Central and Scanner. Procedure Open the configuration file central/proxy-config-secret.yaml from your deployment bundle directory. Note If you are using Helm the configuration file is at central/templates/proxy-config-secret.yaml . 
Edit the fields you want to modify in the configuration file: apiVersion: v1 kind: Secret metadata: namespace: stackrox name: proxy-config type: Opaque stringData: config.yaml: |- 1 # # NOTE: Both central and scanner should be restarted if this secret is changed. # # While it is possible that some components will pick up the new proxy configuration # # without a restart, it cannot be guaranteed that this will apply to every possible # # integration etc. # url: http://proxy.name:port 2 # username: username 3 # password: password 4 # # If the following value is set to true, the proxy wil NOT be excluded for the default hosts: # # - *.stackrox, *.stackrox.svc # # - localhost, localhost.localdomain, 127.0.0.0/8, ::1 # # - *.local # omitDefaultExcludes: false # excludes: # hostnames (may include * components) for which you do not 5 # # want to use a proxy, like in-cluster repositories. # - some.domain # # The following configuration sections allow specifying a different proxy to be used for HTTP(S) connections. # # If they are omitted, the above configuration is used for HTTP(S) connections as well as TCP connections. # # If only the `http` section is given, it will be used for HTTPS connections as well. # # Note: in most cases, a single, global proxy configuration is sufficient. # http: # url: http://http-proxy.name:port 6 # username: username 7 # password: password 8 # https: # url: http://https-proxy.name:port 9 # username: username 10 # password: password 11 3 4 7 8 10 11 Adding a username and a password is optional, both at the beginning and in the http and https sections. 2 6 9 The url option supports the following URL schemes: http:// for an HTTP proxy. https:// for a TLS-enabled HTTP proxy. socks5:// for a SOCKS5 proxy. 5 The excludes list can contain DNS names (with or without * wildcards), IP addresses, or IP blocks in CIDR notation (for example, 10.0.0.0/8 ). The values in this list are applied to all outgoing connections, regardless of protocol. 1 The |- line in the stringData section indicates the start of the configuration data. Note When you first open the file, all values are commented out (by using the # sign at the beginning of the line). Lines starting with double hash signs # # contain explanation of the configuration keys. Make sure that when you edit the fields, you maintain an indentation level of two spaces relative to the config.yaml: |- line. After editing the configuration file, you can proceed with your usual installation. The updated configuration instructs Red Hat Advanced Cluster Security for Kubernetes to use the proxy running on the provided address and the port number. | [
"oc -n stackrox get secret proxy-config -o go-template='{{index .data \"config.yaml\" | base64decode}}{{\"\\n\"}}' > /tmp/proxy-config.yaml",
"oc -n stackrox create secret generic proxy-config --from-file=config.yaml=/tmp/proxy-config.yaml -o yaml --dry-run | oc label -f - --local -o yaml app.kubernetes.io/name=stackrox | oc apply -f -",
"apiVersion: v1 kind: Secret metadata: namespace: stackrox name: proxy-config type: Opaque stringData: config.yaml: |- 1 # # NOTE: Both central and scanner should be restarted if this secret is changed. # # While it is possible that some components will pick up the new proxy configuration # # without a restart, it cannot be guaranteed that this will apply to every possible # # integration etc. # url: http://proxy.name:port 2 # username: username 3 # password: password 4 # # If the following value is set to true, the proxy wil NOT be excluded for the default hosts: # # - *.stackrox, *.stackrox.svc # # - localhost, localhost.localdomain, 127.0.0.0/8, ::1 # # - *.local # omitDefaultExcludes: false # excludes: # hostnames (may include * components) for which you do not 5 # # want to use a proxy, like in-cluster repositories. # - some.domain # # The following configuration sections allow specifying a different proxy to be used for HTTP(S) connections. # # If they are omitted, the above configuration is used for HTTP(S) connections as well as TCP connections. # # If only the `http` section is given, it will be used for HTTPS connections as well. # # Note: in most cases, a single, global proxy configuration is sufficient. # http: # url: http://http-proxy.name:port 6 # username: username 7 # password: password 8 # https: # url: http://https-proxy.name:port 9 # username: username 10 # password: password 11"
]
| https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/configuring/configure-proxy |
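The procedure above notes that Central and Scanner pods must be restarted if outgoing connections misbehave after a proxy configuration change. The following is a minimal sketch of such a restart; it assumes the default stackrox namespace and the app=central and app=scanner pod labels used by a typical installation, which might differ in your environment:

oc -n stackrox delete pod -l app=central
oc -n stackrox delete pod -l app=scanner

The owning deployments recreate the pods, which then pick up the current proxy-config secret.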
function::gettimeofday_ns | function::gettimeofday_ns Name function::gettimeofday_ns - Number of nanoseconds since UNIX epoch. Synopsis Arguments None General Syntax gettimeofday_ns: long Description This function returns the number of nanoseconds since the UNIX epoch. | [
"function gettimeofday_ns:long()"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-gettimeofday-ns |
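A minimal SystemTap sketch that uses this function is shown below; it prints the current wall-clock time in nanoseconds once and exits, and assumes the systemtap runtime is installed on the host:

stap -e 'probe begin { printf("%d\n", gettimeofday_ns()); exit() }'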
Virtualization Getting Started Guide | Virtualization Getting Started Guide Red Hat Enterprise Linux 6 An introduction to virtualization concepts Jiri Herrmann Red Hat Customer Content Services [email protected] Yehuda Zimmerman Red Hat Customer Content Services [email protected] Dayle Parker Red Hat Customer Content Services Laura Novich Red Hat Customer Content Services Jacquelynn East Red Hat Customer Content Services Scott Radvan Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_getting_started_guide/index |
3.13. Cluster Networking | 3.13. Cluster Networking Cluster level networking objects include: Clusters Logical Networks A data center is a logical grouping of multiple clusters and each cluster is a logical group of multiple hosts. The following diagram depicts the contents of a single cluster. Figure 3.1. Networking within a cluster Hosts in a cluster all have access to the same storage domains. Hosts in a cluster also have logical networks applied to the cluster. For a virtual machine logical network to become operational for use with virtual machines, the network must be defined and implemented for each host in the cluster using the Red Hat Virtualization Manager. Other logical network types can be implemented on only the hosts that use them. Multi-host network configuration automatically applies any updated network settings to all of the hosts within the data center to which the network is assigned. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/Cluster_Networking |
2.2. Tracking Configuration History | 2.2. Tracking Configuration History Data from the Red Hat Virtualization History Database (called ovirt_engine_history ) can be used to track the engine database. The ETL service, ovirt-engine-dwhd , tracks three types of changes: A new entity is added to the engine database - the ETL Service replicates the change to the ovirt_engine_history database as a new entry. An existing entity is updated - the ETL Service replicates the change to the ovirt_engine_history database as a new entry. An entity is removed from the engine database - A new entry in the ovirt_engine_history database flags the corresponding entity as removed. Removed entities are only flagged as removed. The configuration tables in the ovirt_engine_history database differ from the corresponding tables in the engine database in several ways. The most apparent difference is they contain fewer configuration columns. This is because certain configuration items are less interesting to report than others and are not kept due to database size considerations. Also, columns from a few tables in the engine database appear in a single table in ovirt_engine_history and have different column names to make viewing data more convenient and comprehensible. All configuration tables contain: a history_id to indicate the configuration version of the entity; a create_date field to indicate when the entity was added to the system; an update_date field to indicate when the entity was changed; and a delete_date field to indicate the date the entity was removed from the system. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/data_warehouse_guide/tracking_configuration_history |
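To illustrate the common columns described above, the following is a hedged sketch of querying one configuration table in the ovirt_engine_history database; <configuration_table> is a placeholder because table names differ between Data Warehouse versions, and the psql connection options depend on your installation:

psql -d ovirt_engine_history -c "SELECT history_id, create_date, update_date, delete_date FROM <configuration_table> ORDER BY history_id DESC LIMIT 10;"

Rows with a non-NULL delete_date correspond to entities that were removed from the engine database and are therefore only flagged as removed.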
Chapter 10. ServiceAccount [v1] | Chapter 10. ServiceAccount [v1] Description ServiceAccount binds together: * a name, understood by users, and perhaps by peripheral systems, for an identity * a principal that can be authenticated and authorized * a set of secrets Type object 10.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources automountServiceAccountToken boolean AutomountServiceAccountToken indicates whether pods running as this service account should have an API token automatically mounted. Can be overridden at the pod level. imagePullSecrets array ImagePullSecrets is a list of references to secrets in the same namespace to use for pulling any images in pods that reference this ServiceAccount. ImagePullSecrets are distinct from Secrets because Secrets can be mounted in the pod, but ImagePullSecrets are only accessed by the kubelet. More info: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod imagePullSecrets[] object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata secrets array Secrets is a list of the secrets in the same namespace that pods running using this ServiceAccount are allowed to use. Pods are only limited to this list if this service account has a "kubernetes.io/enforce-mountable-secrets" annotation set to "true". This field should not be used to find auto-generated service account token secrets for use outside of pods. Instead, tokens can be requested directly using the TokenRequest API, or service account token secrets can be manually created. More info: https://kubernetes.io/docs/concepts/configuration/secret secrets[] object ObjectReference contains enough information to let you inspect or modify the referred object. 10.1.1. .imagePullSecrets Description ImagePullSecrets is a list of references to secrets in the same namespace to use for pulling any images in pods that reference this ServiceAccount. ImagePullSecrets are distinct from Secrets because Secrets can be mounted in the pod, but ImagePullSecrets are only accessed by the kubelet. More info: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod Type array 10.1.2. .imagePullSecrets[] Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 10.1.3. 
.secrets Description Secrets is a list of the secrets in the same namespace that pods running using this ServiceAccount are allowed to use. Pods are only limited to this list if this service account has a "kubernetes.io/enforce-mountable-secrets" annotation set to "true". This field should not be used to find auto-generated service account token secrets for use outside of pods. Instead, tokens can be requested directly using the TokenRequest API, or service account token secrets can be manually created. More info: https://kubernetes.io/docs/concepts/configuration/secret Type array 10.1.4. .secrets[] Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 10.2. API endpoints The following API endpoints are available: /api/v1/serviceaccounts GET : list or watch objects of kind ServiceAccount /api/v1/watch/serviceaccounts GET : watch individual changes to a list of ServiceAccount. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/serviceaccounts DELETE : delete collection of ServiceAccount GET : list or watch objects of kind ServiceAccount POST : create a ServiceAccount /api/v1/watch/namespaces/{namespace}/serviceaccounts GET : watch individual changes to a list of ServiceAccount. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/serviceaccounts/{name} DELETE : delete a ServiceAccount GET : read the specified ServiceAccount PATCH : partially update the specified ServiceAccount PUT : replace the specified ServiceAccount /api/v1/watch/namespaces/{namespace}/serviceaccounts/{name} GET : watch changes to an object of kind ServiceAccount. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 10.2.1. /api/v1/serviceaccounts HTTP method GET Description list or watch objects of kind ServiceAccount Table 10.1. HTTP responses HTTP code Reponse body 200 - OK ServiceAccountList schema 401 - Unauthorized Empty 10.2.2. 
/api/v1/watch/serviceaccounts HTTP method GET Description watch individual changes to a list of ServiceAccount. deprecated: use the 'watch' parameter with a list operation instead. Table 10.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 10.2.3. /api/v1/namespaces/{namespace}/serviceaccounts HTTP method DELETE Description delete collection of ServiceAccount Table 10.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 10.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind ServiceAccount Table 10.5. HTTP responses HTTP code Reponse body 200 - OK ServiceAccountList schema 401 - Unauthorized Empty HTTP method POST Description create a ServiceAccount Table 10.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.7. Body parameters Parameter Type Description body ServiceAccount schema Table 10.8. HTTP responses HTTP code Reponse body 200 - OK ServiceAccount schema 201 - Created ServiceAccount schema 202 - Accepted ServiceAccount schema 401 - Unauthorized Empty 10.2.4. /api/v1/watch/namespaces/{namespace}/serviceaccounts HTTP method GET Description watch individual changes to a list of ServiceAccount. deprecated: use the 'watch' parameter with a list operation instead. Table 10.9. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 10.2.5. /api/v1/namespaces/{namespace}/serviceaccounts/{name} Table 10.10. Global path parameters Parameter Type Description name string name of the ServiceAccount HTTP method DELETE Description delete a ServiceAccount Table 10.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 10.12. 
HTTP responses HTTP code Reponse body 200 - OK ServiceAccount schema 202 - Accepted ServiceAccount schema 401 - Unauthorized Empty HTTP method GET Description read the specified ServiceAccount Table 10.13. HTTP responses HTTP code Reponse body 200 - OK ServiceAccount schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ServiceAccount Table 10.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.15. HTTP responses HTTP code Reponse body 200 - OK ServiceAccount schema 201 - Created ServiceAccount schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ServiceAccount Table 10.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.17. Body parameters Parameter Type Description body ServiceAccount schema Table 10.18. HTTP responses HTTP code Reponse body 200 - OK ServiceAccount schema 201 - Created ServiceAccount schema 401 - Unauthorized Empty 10.2.6. /api/v1/watch/namespaces/{namespace}/serviceaccounts/{name} Table 10.19. 
Global path parameters Parameter Type Description name string name of the ServiceAccount HTTP method GET Description watch changes to an object of kind ServiceAccount. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 10.20. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/security_apis/serviceaccount-v1 |
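A minimal sketch of creating a ServiceAccount that uses the fields documented above follows; the build-bot and my-pull-secret names are illustrative only:

oc apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-bot
  namespace: default
automountServiceAccountToken: false
imagePullSecrets:
- name: my-pull-secret
EOF

oc get serviceaccount build-bot -o jsonpath='{.imagePullSecrets[*].name}{"\n"}'

The second command reads back the imagePullSecrets list to confirm the object was stored as expected.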
8.37. createrepo | 8.37. createrepo 8.37.1. RHBA-2014:0491 - createrepo bug fix and enhancement update An updated createrepo package that fixes three bugs and adds two enhancements is now available for Red Hat Enterprise Linux 6. The createrepo package contains a set of utilities used to generate and maintain a common metadata repository from a directory of rpm packages. Bug Fixes BZ# 1035588 Previously, the createrepo utility did not test file locking correctly. As a consequence, createrepo terminated unexpectedly with a traceback when it was executed in a directory located on a Common Internet File System (CIFS) share provided by a NetApp storage appliance. The test for file locking has been corrected and createrepo now works as expected in the described situation. BZ# 1083185 Prior to this update, if the createrepo utility was executed with the "-i" or "--pkglist" options and the specified file name did not exist, createrepo terminated unexpectedly with a traceback. The createrepo utility has been modified to handle this error condition properly, and it now exits gracefully in this situation. BZ# 1088886 Prior to this update, the createrepo packages had descriptions which did not indicate that the maintenance utilities were present in the package. This update corrects this omission. In addition, this update adds the following Enhancements BZ# 952602 This update introduces support for the following new options to the modifyrepo utility: "--checksum", used to specify the checksum type; "--unique-md-filenames", used to include the file's checksum in the file name; and "--simple-md-filenames", used to not include the file's checksum in the file name. The "--unique-md-filenames" option is a default option for this utility. BZ# 1093713 Previously, certain options were not described in the modifyrepo(1) and mergerepo(1) man pages. These man pages now document the following modifyrepo utility command line options: "--mdtype", "--remove", "--compress", "--no-compress", "--compress-type", "--checksum", "--unique-md-filenames", "--simple-md-filenames", "--version", and "--help". These man pages also now document the following mergerepo utility command line options: "--no-database", "--compress-type", "--version", and "--help". Users of createrepo are advised to upgrade to this updated package, which fixes these bugs and adds this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/createrepo |
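As a brief illustration of the modifyrepo options covered by this advisory, the following hedged sketch inserts an updateinfo.xml metadata file into a repository with an explicit checksum type and simple metadata file names; the paths are examples only:

modifyrepo --mdtype=updateinfo --checksum=sha256 --simple-md-filenames /tmp/updateinfo.xml /path/to/repo/repodata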
Config APIs | Config APIs OpenShift Container Platform 4.17 Reference guide for config APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/config_apis/index |
5.5. Hot Plugging vCPUs | 5.5. Hot Plugging vCPUs You can hot plug vCPUs. Hot plugging means enabling or disabling devices while a virtual machine is running. Important Hot unplugging a vCPU is only supported if the vCPU was previously hot plugged. A virtual machine's vCPUs cannot be hot unplugged to less vCPUs than it was originally created with. The following prerequisites apply: The virtual machine's Operating System must be explicitly set in the New Virtual Machine or Edit Virtual Machine window. The virtual machine's operating system must support CPU hot plug. See the table below for support details. Windows virtual machines must have the guest agents installed. See Installing the Guest Agents and Drivers on Windows . Hot Plugging vCPUs Click Compute Virtual Machines and select a running virtual machine. Click Edit . Click the System tab. Change the value of Virtual Sockets as required. Click OK . Table 5.1. Operating System Support Matrix for vCPU Hot Plug Operating System Version Architecture Hot Plug Supported Hot Unplug Supported Red Hat Enterprise Linux Atomic Host 7 x86 Yes Yes Red Hat Enterprise Linux 6.3+ x86 Yes Yes Red Hat Enterprise Linux 7.0+ x86 Yes Yes Red Hat Enterprise Linux 7.3+ PPC64 Yes Yes Red Hat Enterprise Linux 8.0+ x86 Yes Yes Microsoft Windows Server 2012 R2 All x64 Yes No Microsoft Windows Server 2016 Standard, Datacenter x64 Yes No Microsoft Windows Server 2019 Standard, Datacenter x64 Yes No Microsoft Windows 8.x All x86 Yes No Microsoft Windows 8.x All x64 Yes No Microsoft Windows 10 All x86 Yes No Microsoft Windows 10 All x64 Yes No | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/cpu_hot_plug |
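After hot plugging vCPUs through the Administration Portal as described above, you can confirm from inside a Linux guest that the additional processors are visible. This is a generic sketch and the output format varies between guest operating systems:

lscpu | grep '^CPU(s):'
grep -c ^processor /proc/cpuinfo

Both commands report the total number of vCPUs currently seen by the guest once it has brought the new processors online.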
2.6. Additional Resources | 2.6. Additional Resources For more information about installation in general, see the Red Hat Enterprise Linux 7 Installation Guide . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-security_tips_for_installations-additional_resources |
function::user_string2_n_warn | function::user_string2_n_warn Name function::user_string2_n_warn - Retrieves string from user space with alternative warning string Synopsis Arguments addr the user space address to retrieve the string from n the maximum length of the string (if not null terminated) warn_msg the warning message to return when data isn't available Description Returns up to n characters of a C string from a given user space memory address. Reports the given warning message in the rare cases when userspace data is not accessible and warns (but does not abort) about the failure. | [
"user_string2_n_warn:string(addr:long,n:long,warn_msg:string)"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-user-string2-n-warn |
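A hedged SystemTap sketch that uses this function is shown below. It assumes a probe point in which the target variable $filename holds a user-space pointer (for example, the open system call on kernels that provide it); adjust the probe point and length limit for your environment:

stap -e 'probe syscall.open { printf("%s -> %s\n", execname(), user_string2_n_warn($filename, 64, "<inaccessible>")) }'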
7.22. RHEA-2014:1514 - new packages: xmlsec1, lasso, mod_auth_mellon | 7.22. RHEA-2014:1514 - new packages: xmlsec1, lasso, mod_auth_mellon New xmlsec1, lasso, mod_auth_mellon packages are now available for Red Hat Enterprise Linux 6. The mod_auth_mellon packages provide the mod_auth_mellon module that is an authentication service implementing the Security Assertion Markup Language (SAML) federation protocol version 2.0. It grants access based on the attributes received in assertions generated by an IDP server. The lasso packages provide the Lasso library that implements the Liberty Alliance Single Sign On standards, including the SAML and SAML2 specifications. It allows handling of the whole life-cycle of SAML-based federations, and provides bindings for multiple languages. The xmlsec1 packages provide XML Security Library, a C library based on LibXML2 and OpenSSL. The library was created with a goal to support major XML security standards "XML Digital Signature" and "XML Encryption". This enhancement update adds the xmlsec1, lasso, and mod_auth_mellon packages to Red Hat Enterprise Linux 6 in order to provide SAML Service Provider support in the Apache HTTP server. (BZ# 1083605 , BZ# 1087555 , BZ# 1090812 ) All users who require support for SAML-based federations in the Apache HTTP server are advised to install these new packages. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/rhea-2014-1514 |
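A minimal sketch of installing the new packages on Red Hat Enterprise Linux 6, as advised above, is shown below; package availability depends on the repositories enabled for your subscription:

yum install xmlsec1 lasso mod_auth_mellon

After installation, the mod_auth_mellon module can be enabled in the Apache HTTP Server configuration to act as a SAML 2.0 Service Provider.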
Chapter 7. User Defined Functions | Chapter 7. User Defined Functions 7.1. User Defined Functions You can extend the Red Hat JBoss Data Virtualization function library by creating User Defined Functions (UDFs), as well as User Defined Aggregate Functions (UDAFs). The following are used to define a UDF: Function Name - When you create the function name, keep these requirements in mind: You cannot overload existing Red Hat JBoss Data Virtualization functions. The function name must be unique among user-defined functions in its model for the number of arguments. You can use the same function name for different numbers of types of arguments. Hence, you can overload your user-defined functions. The function name cannot contain the '.' character. The function name cannot exceed 255 characters. Input Parameters - defines a type specific signature list. All arguments are considered required. Return Type - the expected type of the returned scalar value. Pushdown - can be one of REQUIRED, NEVER, ALLOWED. Indicates the expected pushdown behavior. If NEVER or ALLOWED are specified then a Java implementation of the function should be supplied. If REQUIRED is used, then user must extend the Translator for the source and add this function to its pushdown function library. invocationClass/invocationMethod - optional properties indicating the static method to invoke when the UDF is not pushed down. Deterministic - if the method will always return the same result for the same input parameters. Defaults to false. It is important to mark the function as deterministic if it returns the same value for the same inputs as this will lead to better performance. See also the Relational extension boolean metadata property "deterministic" and the DDL OPTION property "determinism". Note If using the pushdown UDF in Teiid Designer, the user must create a source function on the source model, so that the parsing will work correctly. Pushdown scalar functions differ from normal user-defined functions in that no code is provided for evaluation in the engine. An exception will be raised if a pushdown required function cannot be evaluated by the appropriate source. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_4_server_development/chap-user_defined_functions |
11.10. Replacing Hosts | 11.10. Replacing Hosts Before replacing hosts ensure that the new peer has the exact disk capacity as that of the one it is replacing. For example, if the peer in the cluster has two 100GB drives, then the new peer must have the same disk capacity and number of drives. Also, steps described in this section can be performed on other volumes types as well, refer to Section 11.9, "Migrating Volumes" when performing replace and reset operations on the volumes. 11.10.1. Replacing a Host Machine with a Different Hostname You can replace a failed host machine with another host that has a different hostname. In the following example the original machine which has had an irrecoverable failure is server0.example.com and the replacement machine is server5.example.com . The brick with an unrecoverable failure is server0.example.com:/rhgs/brick1 and the replacement brick is server5.example.com:/rhgs/brick1 . Stop the geo-replication session if configured by executing the following command. Probe the new peer from one of the existing peers to bring it into the cluster. Ensure that the new brick (server5.example.com:/rhgs/brick1) that is replacing the old brick (server0.example.com:/rhgs/brick1) is empty. If the geo-replication session is configured, perform the following steps: Setup the geo-replication session by generating the ssh keys: Create geo-replication session again with force option to distribute the keys from new nodes to Slave nodes. After successfully setting up the shared storage volume, when a new node is replaced in the cluster, the shared storage is not mounted automatically on this node. Neither is the /etc/fstab entry added for the shared storage on this node. To make use of shared storage on this node, execute the following commands: Note With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/ . For more information on setting up shared storage volume, see Section 11.12, "Setting up Shared Storage Volume" . Configure the meta-volume for geo-replication: For more information on configuring meta-volume, see Section 10.3.5, "Configuring a Meta-Volume" . Retrieve the brick paths in server0.example.com using the following command: Brick path in server0.example.com is /rhgs/brick1 . This has to be replaced with the brick in the newly added host, server5.example.com . Create the required brick path in server5.example.com.For example, if /rhs/brick is the XFS mount point in server5.example.com, then create a brick directory in that path. Execute the replace-brick command with the force option: Verify that the new brick is online. Initiate self-heal on the volume. The status of the heal process can be seen by executing the command: The status of the heal process can be seen by executing the command: Detach the original machine from the trusted pool. Ensure that after the self-heal completes, the extended attributes are set to zero on the other bricks in the replica. In this example, the extended attributes trusted.afr.vol-client-0 and trusted.afr.vol-client-1 have zero values. This means that the data on the two bricks is identical. If these attributes are not zero after self-heal is completed, the data has not been synchronised correctly. Start the geo-replication session using force option: 11.10.2. Replacing a Host Machine with the Same Hostname You can replace a failed host with another node having the same FQDN (Fully Qualified Domain Name). 
A host in a Red Hat Gluster Storage Trusted Storage Pool has its own identity called the UUID generated by the glusterFS Management Daemon.The UUID for the host is available in /var/lib/glusterd/glusterd.info file. In the following example, the host with the FQDN as server0.example.com was irrecoverable and must to be replaced with a host, having the same FQDN. The following steps have to be performed on the new host. Stop the geo-replication session if configured by executing the following command. Stop the glusterd service on the server0.example.com. On RHEL 7 and RHEL 8, run On RHEL 6, run Important Red Hat Gluster Storage is not supported on Red Hat Enterprise Linux 6 (RHEL 6) from 3.5 Batch Update 1 onwards. See Version Details table in section Red Hat Gluster Storage Software Components and Versions of the Installation Guide Retrieve the UUID of the failed host (server0.example.com) from another of the Red Hat Gluster Storage Trusted Storage Pool by executing the following command: Note that the UUID of the failed host is b5ab2ec3-5411-45fa-a30f-43bd04caf96b Edit the glusterd.info file in the new host and include the UUID of the host you retrieved in the step. Note The operating version of this node must be same as in other nodes of the trusted storage pool. Select any host (say for example, server1.example.com) in the Red Hat Gluster Storage Trusted Storage Pool and retrieve its UUID from the glusterd.info file. Gather the peer information files from the host (server1.example.com) in the step. Execute the following command in that host (server1.example.com) of the cluster. Remove the peer file corresponding to the failed host (server0.example.com) from the /tmp/peers directory. Note that the UUID corresponds to the UUID of the failed host (server0.example.com) retrieved in Step 3. Archive all the files and copy those to the failed host(server0.example.com). Copy the above created file to the new peer. Copy the extracted content to the /var/lib/glusterd/peers directory. Execute the following command in the newly added host with the same name (server0.example.com) and IP Address. Select any other host in the cluster other than the node (server1.example.com) selected in step 5. Copy the peer file corresponding to the UUID of the host retrieved in Step 5 to the new host (server0.example.com) by executing the following command: Start the glusterd service. If new brick has same hostname and same path, refer to Section 11.9.5, "Reconfiguring a Brick in a Volume" , and if it has different hostname and different brick path for replicated volumes then, refer to Section 11.9.2, "Replacing an Old Brick with a New Brick on a Replicate or Distribute-replicate Volume" . In case of disperse volumes, when a new brick has different hostname and different brick path then, refer to Section 11.9.4, "Replacing an Old Brick with a New Brick on a Dispersed or Distributed-dispersed Volume" . Perform the self-heal operation on the restored volume. You can view the gluster volume self-heal status by executing the following command: If the geo-replication session is configured, perform the following steps: Setup the geo-replication session by generating the ssh keys: Create geo-replication session again with force option to distribute the keys from new nodes to Slave nodes. After successfully setting up the shared storage volume, when a new node is replaced in the cluster, the shared storage is not mounted automatically on this node. Neither is the /etc/fstab entry added for the shared storage on this node. 
If the new brick has the same hostname and the same path, refer to Section 11.9.5, "Reconfiguring a Brick in a Volume" . If it has a different hostname and a different brick path, then for replicated volumes refer to Section 11.9.2, "Replacing an Old Brick with a New Brick on a Replicate or Distribute-replicate Volume" . In the case of dispersed volumes, when a new brick has a different hostname and a different brick path, refer to Section 11.9.4, "Replacing an Old Brick with a New Brick on a Dispersed or Distributed-dispersed Volume" . Perform the self-heal operation on the restored volume. You can view the gluster volume self-heal status by executing the following command: If the geo-replication session is configured, perform the following steps: Set up the geo-replication session by generating the ssh keys: Create the geo-replication session again with the force option to distribute the keys from the new nodes to the Slave nodes. After successfully setting up the shared storage volume, when a new node is replaced in the cluster, the shared storage is not mounted automatically on this node. Neither is the /etc/fstab entry added for the shared storage on this node. To make use of shared storage on this node, execute the following commands: Note With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/ . For more information on setting up the shared storage volume, see Section 11.12, "Setting up Shared Storage Volume" . Configure the meta-volume for geo-replication: Start the geo-replication session using the force option: Replacing a host with the same Hostname in a two-node Red Hat Gluster Storage Trusted Storage Pool If there are only two hosts in the Red Hat Gluster Storage Trusted Storage Pool where the host server0.example.com must be replaced, perform the following steps: Stop the geo-replication session, if configured, by executing the following command: Stop the glusterd service on server0.example.com. On RHEL 7 and RHEL 8, run: On RHEL 6, run: Important Red Hat Gluster Storage is not supported on Red Hat Enterprise Linux 6 (RHEL 6) from 3.5 Batch Update 1 onwards. See the Version Details table in the section Red Hat Gluster Storage Software Components and Versions of the Installation Guide. Retrieve the UUID of the failed host (server0.example.com) from another peer in the Red Hat Gluster Storage Trusted Storage Pool by executing the following command: Note that the UUID of the failed host is b5ab2ec3-5411-45fa-a30f-43bd04caf96b Edit the glusterd.info file in the new host (server0.example.com) and include the UUID of the host you retrieved in the previous step. Note The operating version of this node must be the same as that of the other nodes in the trusted storage pool. Create the peer file in the new host (server0.example.com) as /var/lib/glusterd/peers/<uuid-of-other-peer>, named with the UUID of the other host (server1.example.com). The UUID of the host can be obtained with the following command: Example 11.6. Example to obtain the UUID of a host In this case, the UUID of the other peer is 1d9677dc-6159-405e-9319-ad85ec030880 Create a file /var/lib/glusterd/peers/1d9677dc-6159-405e-9319-ad85ec030880 on server0.example.com with the following command: The file you create must contain the following information: Continue to perform steps 12 to 18 as documented in the previous procedure. | [
"gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL stop force",
"gluster peer probe server5.example.com",
"gluster system:: execute gsec_create",
"gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL create push-pem force",
"mount -t glusterfs local node's ip :gluster_shared_storage /var/run/gluster/shared_storage cp /etc/fstab /var/run/gluster/fstab.tmp echo local node's ip :/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults 0 0\" >> /etc/fstab",
"gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config use_meta_volume true",
"gluster volume info <VOLNAME>",
"Volume Name: vol Type: Replicate Volume ID: 0xde822e25ebd049ea83bfaa3c4be2b440 Status: Started Snap Volume: no Number of Bricks: 1 x 2 = 2 Transport-type: tcp Bricks: Brick1: server0.example.com:/rhgs/brick1 Brick2: server1.example.com:/rhgs/brick1 Options Reconfigured: cluster.granular-entry-heal: on performance.readdir-ahead: on snap-max-hard-limit: 256 snap-max-soft-limit: 90 auto-delete: disable",
"mkdir /rhgs/brick1",
"gluster volume replace-brick vol server0.example.com:/rhgs/brick1 server5.example.com:/rhgs/brick1 commit force volume replace-brick: success: replace-brick commit successful",
"gluster volume status Status of volume: vol Gluster process Port Online Pid Brick server5.example.com:/rhgs/brick1 49156 Y 5731 Brick server1.example.com:/rhgs/brick1 49153 Y 5354",
"gluster volume heal VOLNAME",
"gluster volume heal VOLNAME info",
"gluster peer detach (server) All clients mounted through the peer which is getting detached need to be remounted, using one of the other active peers in the trusted storage pool, this ensures that the client gets notification on any changes done on the gluster configuration and if the same has been done do you want to proceed? (y/n) y peer detach: success",
"getfattr -d -m. -e hex /rhgs/brick1 getfattr: Removing leading '/' from absolute path names #file: rhgs/brick1 security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000 trusted.afr.vol-client-0=0x000000000000000000000000 trusted.afr.vol-client-1=0x000000000000000000000000 trusted.gfid=0x00000000000000000000000000000001 trusted.glusterfs.dht=0x0000000100000000000000007ffffffe trusted.glusterfs.volume-id=0xde822e25ebd049ea83bfaa3c4be2b440",
"gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start force",
"gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL stop force",
"systemctl stop glusterd",
"service glusterd stop",
"gluster peer status Number of Peers: 2 Hostname: server1.example.com Uuid: 1d9677dc-6159-405e-9319-ad85ec030880 State: Peer in Cluster (Connected) Hostname: server0.example.com Uuid: b5ab2ec3-5411-45fa-a30f-43bd04caf96b State: Peer Rejected (Connected)",
"cat /var/lib/glusterd/glusterd.info UUID=b5ab2ec3-5411-45fa-a30f-43bd04caf96b operating-version=30703",
"grep -i uuid /var/lib/glusterd/glusterd.info UUID=8cc6377d-0153-4540-b965-a4015494461c",
"cp -a /var/lib/glusterd/peers /tmp/",
"rm /tmp/peers/b5ab2ec3-5411-45fa-a30f-43bd04caf96b",
"cd /tmp; tar -cvf peers.tar peers",
"scp /tmp/peers.tar [email protected]:/tmp",
"tar -xvf /tmp/peers.tar # cp peers/* /var/lib/glusterd/peers/",
"scp /var/lib/glusterd/peers/<UUID-retrieved-from-step5> root@Example1:/var/lib/glusterd/peers/",
"systemctl start glusterd",
"gluster volume heal VOLNAME",
"gluster volume heal VOLNAME info",
"gluster system:: execute gsec_create",
"gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL create push-pem force",
"mount -t glusterfs <local node's ip>:gluster_shared_storage /var/run/gluster/shared_storage # cp /etc/fstab /var/run/gluster/fstab.tmp # echo \"<local node's ip>:/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults 0 0\" >> /etc/fstab",
"gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config use_meta_volume true",
"gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start force",
"gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL stop force",
"systemctl stop glusterd",
"service glusterd stop",
"gluster peer status Number of Peers: 1 Hostname: server0.example.com Uuid: b5ab2ec3-5411-45fa-a30f-43bd04caf96b State: Peer Rejected (Connected)",
"cat /var/lib/glusterd/glusterd.info UUID=b5ab2ec3-5411-45fa-a30f-43bd04caf96b operating-version=30703",
"gluster system:: uuid get",
"For example, gluster system:: uuid get UUID: 1d9677dc-6159-405e-9319-ad85ec030880",
"touch /var/lib/glusterd/peers/1d9677dc-6159-405e-9319-ad85ec030880",
"UUID=<uuid-of-other-node> state=3 hostname=<hostname>"
]
| https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/sect-Replacing_Hosts |
Part II. IBM Power Systems - Installation and Booting | Part II. IBM Power Systems - Installation and Booting This part of the Red Hat Enterprise Linux Installation Guide includes information about installation and basic post-installation troubleshooting for IBM Power Systems servers. IBM Power Systems servers include IBM PowerLinux servers and POWER7, POWER8, and POWER9 Power Systems servers running Linux. For advanced installation options, see Part IV, "Advanced Installation Options" . Important Previous releases of Red Hat Enterprise Linux supported 32-bit and 64-bit Power Systems servers ( ppc and ppc64 , respectively). Red Hat Enterprise Linux 7 supports only 64-bit Power Systems servers ( ppc64 ). | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/part-installation-ibm-power
Chapter 9. MySQL | Chapter 9. MySQL The MySQL database is a multi-user, multi-threaded SQL database server that consists of the MySQL server daemon ( mysqld ) and many client programs and libraries. [7] In Red Hat Enterprise Linux, the mysql-server package provides MySQL. Run the rpm -q mysql-server command to see if the mysql-server package is installed. If it is not installed, run the following command as the root user to install it: 9.1. MySQL and SELinux When MySQL is enabled, it runs confined by default. Confined processes run in their own domains, and are separated from other confined processes. If a confined process is compromised by an attacker, depending on SELinux policy configuration, an attacker's access to resources and the possible damage they can do are limited. The following example demonstrates the MySQL processes running in their own domain. This example assumes the mysql package is installed: Run the getenforce command to confirm SELinux is running in enforcing mode: The getenforce command returns Enforcing when SELinux is running in enforcing mode. Run the service mysqld start command as the root user to start mysqld : Run the ps -eZ | grep mysqld command to view the mysqld processes: The SELinux context associated with the mysqld processes is unconfined_u:system_r:mysqld_t:s0 . The second-to-last part of the context, mysqld_t , is the type. A type defines a domain for processes and a type for files. In this case, the mysqld processes are running in the mysqld_t domain. [7] Refer to the MySQL project page for more information. | [
"~]# yum install mysql-server",
"~]USD getenforce Enforcing",
"~]# service mysqld start Initializing MySQL database: Installing MySQL system tables... [ OK ] Starting MySQL: [ OK ]",
"~]USD ps -eZ | grep mysqld unconfined_u:system_r:mysqld_safe_t:s0 6035 pts/1 00:00:00 mysqld_safe unconfined_u:system_r:mysqld_t:s0 6123 pts/1 00:00:00 mysqld"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/managing_confined_services/chap-managing_confined_services-mysql |
Chapter 7. ConsoleQuickStart [console.openshift.io/v1] | Chapter 7. ConsoleQuickStart [console.openshift.io/v1] Description ConsoleQuickStart is an extension for guiding user through various workflows in the OpenShift web console. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required spec 7.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ConsoleQuickStartSpec is the desired quick start configuration. 7.1.1. .spec Description ConsoleQuickStartSpec is the desired quick start configuration. Type object Required description displayName durationMinutes introduction tasks Property Type Description accessReviewResources array accessReviewResources contains a list of resources that the user's access will be reviewed against in order for the user to complete the Quick Start. The Quick Start will be hidden if any of the access reviews fail. accessReviewResources[] object ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface conclusion string conclusion sums up the Quick Start and suggests the possible steps. (includes markdown) description string description is the description of the Quick Start. (includes markdown) displayName string displayName is the display name of the Quick Start. durationMinutes integer durationMinutes describes approximately how many minutes it will take to complete the Quick Start. icon string icon is a base64 encoded image that will be displayed beside the Quick Start display name. The icon should be an vector image for easy scaling. The size of the icon should be 40x40. introduction string introduction describes the purpose of the Quick Start. (includes markdown) nextQuickStart array (string) nextQuickStart is a list of the following Quick Starts, suggested for the user to try. prerequisites array (string) prerequisites contains all prerequisites that need to be met before taking a Quick Start. (includes markdown) tags array (string) tags is a list of strings that describe the Quick Start. tasks array tasks is the list of steps the user has to perform to complete the Quick Start. tasks[] object ConsoleQuickStartTask is a single step in a Quick Start. 7.1.2. .spec.accessReviewResources Description accessReviewResources contains a list of resources that the user's access will be reviewed against in order for the user to complete the Quick Start. The Quick Start will be hidden if any of the access reviews fail. Type array 7.1.3. 
.spec.accessReviewResources[] Description ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface Type object Property Type Description group string Group is the API Group of the Resource. "*" means all. name string Name is the name of the resource being requested for a "get" or deleted for a "delete". "" (empty) means all. namespace string Namespace is the namespace of the action being requested. Currently, there is no distinction between no namespace and all namespaces "" (empty) is defaulted for LocalSubjectAccessReviews "" (empty) is empty for cluster-scoped resources "" (empty) means "all" for namespace scoped resources from a SubjectAccessReview or SelfSubjectAccessReview resource string Resource is one of the existing resource types. "*" means all. subresource string Subresource is one of the existing resource types. "" means none. verb string Verb is a kubernetes resource API verb, like: get, list, watch, create, update, delete, proxy. "*" means all. version string Version is the API Version of the Resource. "*" means all. 7.1.4. .spec.tasks Description tasks is the list of steps the user has to perform to complete the Quick Start. Type array 7.1.5. .spec.tasks[] Description ConsoleQuickStartTask is a single step in a Quick Start. Type object Required description title Property Type Description description string description describes the steps needed to complete the task. (includes markdown) review object review contains instructions to validate the task is complete. The user will select 'Yes' or 'No'. using a radio button, which indicates whether the step was completed successfully. summary object summary contains information about the passed step. title string title describes the task and is displayed as a step heading. 7.1.6. .spec.tasks[].review Description review contains instructions to validate the task is complete. The user will select 'Yes' or 'No'. using a radio button, which indicates whether the step was completed successfully. Type object Required failedTaskHelp instructions Property Type Description failedTaskHelp string failedTaskHelp contains suggestions for a failed task review and is shown at the end of task. (includes markdown) instructions string instructions contains steps that user needs to take in order to validate his work after going through a task. (includes markdown) 7.1.7. .spec.tasks[].summary Description summary contains information about the passed step. Type object Required failed success Property Type Description failed string failed briefly describes the unsuccessfully passed task. (includes markdown) success string success describes the succesfully passed task. 7.2. API endpoints The following API endpoints are available: /apis/console.openshift.io/v1/consolequickstarts DELETE : delete collection of ConsoleQuickStart GET : list objects of kind ConsoleQuickStart POST : create a ConsoleQuickStart /apis/console.openshift.io/v1/consolequickstarts/{name} DELETE : delete a ConsoleQuickStart GET : read the specified ConsoleQuickStart PATCH : partially update the specified ConsoleQuickStart PUT : replace the specified ConsoleQuickStart 7.2.1. /apis/console.openshift.io/v1/consolequickstarts Table 7.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ConsoleQuickStart Table 7.2. 
Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 7.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ConsoleQuickStart Table 7.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 7.5. HTTP responses HTTP code Reponse body 200 - OK ConsoleQuickStartList schema 401 - Unauthorized Empty HTTP method POST Description create a ConsoleQuickStart Table 7.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.7. Body parameters Parameter Type Description body ConsoleQuickStart schema Table 7.8. HTTP responses HTTP code Reponse body 200 - OK ConsoleQuickStart schema 201 - Created ConsoleQuickStart schema 202 - Accepted ConsoleQuickStart schema 401 - Unauthorized Empty 7.2.2. /apis/console.openshift.io/v1/consolequickstarts/{name} Table 7.9. Global path parameters Parameter Type Description name string name of the ConsoleQuickStart Table 7.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ConsoleQuickStart Table 7.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 7.12. Body parameters Parameter Type Description body DeleteOptions schema Table 7.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ConsoleQuickStart Table 7.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 7.15. HTTP responses HTTP code Reponse body 200 - OK ConsoleQuickStart schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ConsoleQuickStart Table 7.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 7.17. Body parameters Parameter Type Description body Patch schema Table 7.18. HTTP responses HTTP code Reponse body 200 - OK ConsoleQuickStart schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ConsoleQuickStart Table 7.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.20. Body parameters Parameter Type Description body ConsoleQuickStart schema Table 7.21. HTTP responses HTTP code Reponse body 200 - OK ConsoleQuickStart schema 201 - Created ConsoleQuickStart schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/console_apis/consolequickstart-console-openshift-io-v1 |
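For orientation, the schema described above can be exercised with a minimal custom resource. The following is an illustrative sketch only: the name and all text values are invented, and only the required fields (description, displayName, durationMinutes, introduction, and tasks with title and description) are populated.

oc apply -f - <<'EOF'
apiVersion: console.openshift.io/v1
kind: ConsoleQuickStart
metadata:
  name: example-quick-start
spec:
  displayName: Example quick start
  description: A minimal quick start that illustrates the required fields.
  durationMinutes: 5
  introduction: This quick start walks through a single illustrative task.
  tasks:
    - title: Open the web console
      description: Log in to the OpenShift Container Platform web console.
EOF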
Preparing for disaster recovery with Identity Management | Preparing for disaster recovery with Identity Management Red Hat Enterprise Linux 8 Mitigating the effects of server and data loss scenarios in IdM environments Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/preparing_for_disaster_recovery_with_identity_management/index |
Chapter 6. Cluster Operators reference | Chapter 6. Cluster Operators reference This reference guide indexes the cluster Operators shipped by Red Hat that serve as the architectural foundation for OpenShift Container Platform. Cluster Operators are installed by default, unless otherwise noted, and are managed by the Cluster Version Operator (CVO). For more details on the control plane architecture, see Operators in OpenShift Container Platform . Cluster administrators can view cluster Operators in the OpenShift Container Platform web console from the Administration Cluster Settings page. Note Cluster Operators are not managed by Operator Lifecycle Manager (OLM) and OperatorHub. OLM and OperatorHub are part of the Operator Framework used in OpenShift Container Platform for installing and running optional add-on Operators . Some of the following cluster Operators can be disabled prior to installation. For more information see cluster capabilities . 6.1. Cluster Baremetal Operator Note The Cluster Baremetal Operator is an optional cluster capability that can be disabled by cluster administrators during installation. For more information about optional cluster capabilities, see "Cluster capabilities" in Installing . Purpose The Cluster Baremetal Operator (CBO) deploys all the components necessary to take a bare-metal server to a fully functioning worker node ready to run OpenShift Container Platform compute nodes. The CBO ensures that the metal3 deployment, which consists of the Bare Metal Operator (BMO) and Ironic containers, runs on one of the control plane nodes within the OpenShift Container Platform cluster. The CBO also listens for OpenShift Container Platform updates to resources that it watches and takes appropriate action. Project cluster-baremetal-operator Additional resources Bare-metal capability 6.2. Bare Metal Event Relay Purpose The OpenShift Bare Metal Event Relay manages the life-cycle of the Bare Metal Event Relay. The Bare Metal Event Relay enables you to configure the types of cluster event that are monitored using Redfish hardware events. Configuration objects You can use this command to edit the configuration after installation: for example, the webhook port. You can edit configuration objects with: USD oc -n [namespace] edit cm hw-event-proxy-operator-manager-config apiVersion: controller-runtime.sigs.k8s.io/v1alpha1 kind: ControllerManagerConfig health: healthProbeBindAddress: :8081 metrics: bindAddress: 127.0.0.1:8080 webhook: port: 9443 leaderElection: leaderElect: true resourceName: 6e7a703c.redhat-cne.org Project hw-event-proxy-operator CRD The proxy enables applications running on bare-metal clusters to respond quickly to Redfish hardware changes and failures such as breaches of temperature thresholds, fan failure, disk loss, power outages, and memory failure, reported using the HardwareEvent CR. hardwareevents.event.redhat-cne.org : Scope: Namespaced CR: HardwareEvent Validation: Yes 6.3. Cloud Credential Operator Purpose The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). The CCO syncs on CredentialsRequest custom resources (CRs) to allow OpenShift Container Platform components to request cloud provider credentials with the specific permissions that are required for the cluster to run. By setting different values for the credentialsMode parameter in the install-config.yaml file, the CCO can be configured to operate in several different modes. 
If no mode is specified, or the credentialsMode parameter is set to an empty string ( "" ), the CCO operates in its default mode. Project openshift-cloud-credential-operator CRDs credentialsrequests.cloudcredential.openshift.io Scope: Namespaced CR: CredentialsRequest Validation: Yes Configuration objects No configuration required. Additional resources About the Cloud Credential Operator CredentialsRequest custom resource 6.4. Cluster Authentication Operator Purpose The Cluster Authentication Operator installs and maintains the Authentication custom resource in a cluster and can be viewed with: USD oc get clusteroperator authentication -o yaml Project cluster-authentication-operator 6.5. Cluster Autoscaler Operator Purpose The Cluster Autoscaler Operator manages deployments of the OpenShift Cluster Autoscaler using the cluster-api provider. Project cluster-autoscaler-operator CRDs ClusterAutoscaler : This is a singleton resource, which controls the configuration autoscaler instance for the cluster. The Operator only responds to the ClusterAutoscaler resource named default in the managed namespace, the value of the WATCH_NAMESPACE environment variable. MachineAutoscaler : This resource targets a node group and manages the annotations to enable and configure autoscaling for that group, the min and max size. Currently only MachineSet objects can be targeted. 6.6. Cluster Cloud Controller Manager Operator Purpose Note This Operator is General Availability for Microsoft Azure Stack Hub, IBM Cloud, Nutanix, Red Hat OpenStack Platform (RHOSP), and VMware vSphere. It is available as a Technology Preview for Alibaba Cloud, Amazon Web Services (AWS), Google Cloud Platform (GCP), IBM Cloud Power VS, and Microsoft Azure. The Cluster Cloud Controller Manager Operator manages and updates the cloud controller managers deployed on top of OpenShift Container Platform. The Operator is based on the Kubebuilder framework and controller-runtime libraries. It is installed via the Cluster Version Operator (CVO). It contains the following components: Operator Cloud configuration observer By default, the Operator exposes Prometheus metrics through the metrics service. Project cluster-cloud-controller-manager-operator 6.7. Cluster CAPI Operator Note This Operator is available as a Technology Preview for Amazon Web Services (AWS) and Google Cloud Platform (GCP). Purpose The Cluster CAPI Operator maintains the lifecycle of Cluster API resources. This Operator is responsible for all administrative tasks related to deploying the Cluster API project within an OpenShift Container Platform cluster. Project cluster-capi-operator CRDs awsmachines.infrastructure.cluster.x-k8s.io Scope: Namespaced CR: awsmachine Validation: No gcpmachines.infrastructure.cluster.x-k8s.io Scope: Namespaced CR: gcpmachine Validation: No awsmachinetemplates.infrastructure.cluster.x-k8s.io Scope: Namespaced CR: awsmachinetemplate Validation: No gcpmachinetemplates.infrastructure.cluster.x-k8s.io Scope: Namespaced CR: gcpmachinetemplate Validation: No 6.8. Cluster Config Operator Purpose The Cluster Config Operator performs the following tasks related to config.openshift.io : Creates CRDs. Renders the initial custom resources. Handles migrations. Project cluster-config-operator 6.9. Cluster CSI Snapshot Controller Operator Note The Cluster CSI Snapshot Controller Operator is an optional cluster capability that can be disabled by cluster administrators during installation. 
For more information about optional cluster capabilities, see "Cluster capabilities" in Installing . Purpose The Cluster CSI Snapshot Controller Operator installs and maintains the CSI Snapshot Controller. The CSI Snapshot Controller is responsible for watching the VolumeSnapshot CRD objects and manages the creation and deletion lifecycle of volume snapshots. Project cluster-csi-snapshot-controller-operator Additional resources CSI snapshot controller capability 6.10. Cluster Image Registry Operator Purpose The Cluster Image Registry Operator manages a singleton instance of the OpenShift image registry. It manages all configuration of the registry, including creating storage. On initial start up, the Operator creates a default image-registry resource instance based on the configuration detected in the cluster. This indicates what cloud storage type to use based on the cloud provider. If insufficient information is available to define a complete image-registry resource, then an incomplete resource is defined and the Operator updates the resource status with information about what is missing. The Cluster Image Registry Operator runs in the openshift-image-registry namespace and it also manages the registry instance in that location. All configuration and workload resources for the registry reside in that namespace. Project cluster-image-registry-operator 6.11. Cluster Machine Approver Operator Purpose The Cluster Machine Approver Operator automatically approves the CSRs requested for a new worker node after cluster installation. Note For the control plane node, the approve-csr service on the bootstrap node automatically approves all CSRs during the cluster bootstrapping phase. Project cluster-machine-approver-operator 6.12. Cluster Monitoring Operator Purpose The Cluster Monitoring Operator (CMO) manages and updates the Prometheus-based cluster monitoring stack deployed on top of OpenShift Container Platform. Project openshift-monitoring CRDs alertmanagers.monitoring.coreos.com Scope: Namespaced CR: alertmanager Validation: Yes prometheuses.monitoring.coreos.com Scope: Namespaced CR: prometheus Validation: Yes prometheusrules.monitoring.coreos.com Scope: Namespaced CR: prometheusrule Validation: Yes servicemonitors.monitoring.coreos.com Scope: Namespaced CR: servicemonitor Validation: Yes Configuration objects USD oc -n openshift-monitoring edit cm cluster-monitoring-config 6.13. Cluster Network Operator Purpose The Cluster Network Operator installs and upgrades the networking components on an OpenShift Container Platform cluster. 6.14. Cluster Samples Operator Note The Cluster Samples Operator is an optional cluster capability that can be disabled by cluster administrators during installation. For more information about optional cluster capabilities, see "Cluster capabilities" in Installing . Purpose The Cluster Samples Operator manages the sample image streams and templates stored in the openshift namespace. On initial start up, the Operator creates the default samples configuration resource to initiate the creation of the image streams and templates. The configuration object is a cluster scoped object with the key cluster and type configs.samples . The image streams are the Red Hat Enterprise Linux CoreOS (RHCOS)-based OpenShift Container Platform image streams pointing to images on registry.redhat.io . Similarly, the templates are those categorized as OpenShift Container Platform templates. 
The Cluster Samples Operator deployment is contained within the openshift-cluster-samples-operator namespace. On start up, the install pull secret is used by the image stream import logic in the OpenShift image registry and API server to authenticate with registry.redhat.io . An administrator can create any additional secrets in the openshift namespace if they change the registry used for the sample image streams. If created, those secrets contain the content of a config.json for docker needed to facilitate image import. The image for the Cluster Samples Operator contains image stream and template definitions for the associated OpenShift Container Platform release. After the Cluster Samples Operator creates a sample, it adds an annotation that denotes the OpenShift Container Platform version that it is compatible with. The Operator uses this annotation to ensure that each sample matches the compatible release version. Samples outside of its inventory are ignored, as are skipped samples. Modifications to any samples that are managed by the Operator are allowed as long as the version annotation is not modified or deleted. However, on an upgrade, as the version annotation will change, those modifications can get replaced as the sample will be updated with the newer version. The Jenkins images are part of the image payload from the installation and are tagged into the image streams directly. The samples resource includes a finalizer, which cleans up the following upon its deletion: Operator-managed image streams Operator-managed templates Operator-generated configuration resources Cluster status resources Upon deletion of the samples resource, the Cluster Samples Operator recreates the resource using the default configuration. Project cluster-samples-operator Additional resources OpenShift samples capability 6.15. Cluster Storage Operator Note The Cluster Storage Operator is an optional cluster capability that can be disabled by cluster administrators during installation. For more information about optional cluster capabilities, see "Cluster capabilities" in Installing . Purpose The Cluster Storage Operator sets OpenShift Container Platform cluster-wide storage defaults. It ensures a default storageclass exists for OpenShift Container Platform clusters. It also installs Container Storage Interface (CSI) drivers which enable your cluster to use various storage backends. Project cluster-storage-operator Configuration No configuration is required. Notes The storage class that the Operator creates can be made non-default by editing its annotation, but this storage class cannot be deleted as long as the Operator runs. Additional resources Storage capability 6.16. Cluster Version Operator Purpose Cluster Operators manage specific areas of cluster functionality. The Cluster Version Operator (CVO) manages the lifecycle of cluster Operators, many of which are installed in OpenShift Container Platform by default. The CVO also checks with the OpenShift Update Service to see the valid updates and update paths based on current component versions and information in the graph by collecting the status of both the cluster version and its cluster Operators. This status includes the condition type, which informs you of the health and current state of the OpenShift Container Platform cluster. For more information regarding cluster version condition types, see "Understanding cluster version condition types". Project cluster-version-operator Additional resources Understanding cluster version condition types 6.17. 
Console Operator Note The Console Operator is an optional cluster capability that can be disabled by cluster administrators during installation. If you disable the Console Operator at installation, your cluster is still supported and upgradable. For more information about optional cluster capabilities, see "Cluster capabilities" in Installing . Purpose The Console Operator installs and maintains the OpenShift Container Platform web console on a cluster. The Console Operator is installed by default and automatically maintains a console. Project console-operator Additional resources Web console capability 6.18. Control Plane Machine Set Operator Note This Operator is available for Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and VMware vSphere. Purpose The Control Plane Machine Set Operator automates the management of control plane machine resources within an OpenShift Container Platform cluster. Project cluster-control-plane-machine-set-operator CRDs controlplanemachineset.machine.openshift.io Scope: Namespaced CR: ControlPlaneMachineSet Validation: Yes Additional resources About control plane machine sets ControlPlaneMachineSet custom resource 6.19. DNS Operator Purpose The DNS Operator deploys and manages CoreDNS to provide a name resolution service to pods that enables DNS-based Kubernetes Service discovery in OpenShift Container Platform. The Operator creates a working default deployment based on the cluster's configuration. The default cluster domain is cluster.local . Configuration of the CoreDNS Corefile or Kubernetes plugin is not yet supported. The DNS Operator manages CoreDNS as a Kubernetes daemon set exposed as a service with a static IP. CoreDNS runs on all nodes in the cluster. Project cluster-dns-operator 6.20. etcd cluster Operator Purpose The etcd cluster Operator automates etcd cluster scaling, enables etcd monitoring and metrics, and simplifies disaster recovery procedures. Project cluster-etcd-operator CRDs etcds.operator.openshift.io Scope: Cluster CR: etcd Validation: Yes Configuration objects USD oc edit etcd cluster 6.21. Ingress Operator Purpose The Ingress Operator configures and manages the OpenShift Container Platform router. Project openshift-ingress-operator CRDs clusteringresses.ingress.openshift.io Scope: Namespaced CR: clusteringresses Validation: No Configuration objects Cluster config Type Name: clusteringresses.ingress.openshift.io Instance Name: default View Command: USD oc get clusteringresses.ingress.openshift.io -n openshift-ingress-operator default -o yaml Notes The Ingress Operator sets up the router in the openshift-ingress project and creates the deployment for the router: USD oc get deployment -n openshift-ingress The Ingress Operator uses the clusterNetwork[].cidr from the network/cluster status to determine what mode (IPv4, IPv6, or dual stack) the managed Ingress Controller (router) should operate in. For example, if clusterNetwork contains only a v6 cidr , then the Ingress Controller operates in IPv6-only mode. In the following example, Ingress Controllers managed by the Ingress Operator will run in IPv4-only mode because only one cluster network exists and the network is an IPv4 cidr : USD oc get network/cluster -o jsonpath='{.status.clusterNetwork[*]}' Example output map[cidr:10.128.0.0/14 hostPrefix:23] 6.22. Insights Operator Note The Insights Operator is an optional cluster capability that can be disabled by cluster administrators during installation. 
For more information about optional cluster capabilities, see "Cluster capabilities" in Installing . Purpose The Insights Operator gathers OpenShift Container Platform configuration data and sends it to Red Hat. The data is used to produce proactive insights recommendations about potential issues that a cluster might be exposed to. These insights are communicated to cluster administrators through Insights Advisor on console.redhat.com . Project insights-operator Configuration No configuration is required. Notes Insights Operator complements OpenShift Container Platform Telemetry. Additional resources Insights capability See About remote health monitoring for details about Insights Operator and Telemetry. 6.23. Kubernetes API Server Operator Purpose The Kubernetes API Server Operator manages and updates the Kubernetes API server deployed on top of OpenShift Container Platform. The Operator is based on the OpenShift Container Platform library-go framework and it is installed using the Cluster Version Operator (CVO). Project openshift-kube-apiserver-operator CRDs kubeapiservers.operator.openshift.io Scope: Cluster CR: kubeapiserver Validation: Yes Configuration objects USD oc edit kubeapiserver 6.24. Kubernetes Controller Manager Operator Purpose The Kubernetes Controller Manager Operator manages and updates the Kubernetes Controller Manager deployed on top of OpenShift Container Platform. The Operator is based on OpenShift Container Platform library-go framework and it is installed via the Cluster Version Operator (CVO). It contains the following components: Operator Bootstrap manifest renderer Installer based on static pods Configuration observer By default, the Operator exposes Prometheus metrics through the metrics service. Project cluster-kube-controller-manager-operator 6.25. Kubernetes Scheduler Operator Purpose The Kubernetes Scheduler Operator manages and updates the Kubernetes Scheduler deployed on top of OpenShift Container Platform. The Operator is based on the OpenShift Container Platform library-go framework and it is installed with the Cluster Version Operator (CVO). The Kubernetes Scheduler Operator contains the following components: Operator Bootstrap manifest renderer Installer based on static pods Configuration observer By default, the Operator exposes Prometheus metrics through the metrics service. Project cluster-kube-scheduler-operator Configuration The configuration for the Kubernetes Scheduler is the result of merging: a default configuration. an observed configuration from the spec schedulers.config.openshift.io . All of these are sparse configurations, invalidated JSON snippets which are merged to form a valid configuration at the end. 6.26. Kubernetes Storage Version Migrator Operator Purpose The Kubernetes Storage Version Migrator Operator detects changes of the default storage version, creates migration requests for resource types when the storage version changes, and processes migration requests. Project cluster-kube-storage-version-migrator-operator 6.27. Machine API Operator Purpose The Machine API Operator manages the lifecycle of specific purpose custom resource definitions (CRD), controllers, and RBAC objects that extend the Kubernetes API. This declares the desired state of machines in a cluster. Project machine-api-operator CRDs MachineSet Machine MachineHealthCheck 6.28. 
Machine Config Operator Purpose The Machine Config Operator manages and applies configuration and updates of the base operating system and container runtime, including everything between the kernel and kubelet. There are four components: machine-config-server : Provides Ignition configuration to new machines joining the cluster. machine-config-controller : Coordinates the upgrade of machines to the desired configurations defined by a MachineConfig object. Options are provided to control the upgrade for sets of machines individually. machine-config-daemon : Applies new machine configuration during update. Validates and verifies the state of the machine to the requested machine configuration. machine-config : Provides a complete source of machine configuration at installation, first start up, and updates for a machine. Important Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates. To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies. Additional resources About the OpenShift SDN network plugin . Project openshift-machine-config-operator 6.29. Marketplace Operator Note The Marketplace Operator is an optional cluster capability that can be disabled by cluster administrators if it is not needed. For more information about optional cluster capabilities, see "Cluster capabilities" in Installing . Purpose The Marketplace Operator simplifies the process for bringing off-cluster Operators to your cluster by using a set of default Operator Lifecycle Manager (OLM) catalogs on the cluster. When the Marketplace Operator is installed, it creates the openshift-marketplace namespace. OLM ensures catalog sources installed in the openshift-marketplace namespace are available for all namespaces on the cluster. Project operator-marketplace Additional resources Marketplace capability 6.30. Node Tuning Operator Purpose The Node Tuning Operator helps you manage node-level tuning by orchestrating the TuneD daemon and achieves low latency performance by using the Performance Profile controller. The majority of high-performance applications require some level of kernel tuning. The Node Tuning Operator provides a unified management interface to users of node-level sysctls and more flexibility to add custom tuning specified by user needs. The Operator manages the containerized TuneD daemon for OpenShift Container Platform as a Kubernetes daemon set. It ensures the custom tuning specification is passed to all containerized TuneD daemons running in the cluster in the format that the daemons understand. The daemons run on all nodes in the cluster, one per node. Node-level settings applied by the containerized TuneD daemon are rolled back on an event that triggers a profile change or when the containerized TuneD daemon is terminated gracefully by receiving and handling a termination signal. 
The Node Tuning Operator uses the Performance Profile controller to implement automatic tuning to achieve low latency performance for OpenShift Container Platform applications. The cluster administrator configures a performance profile to define node-level settings such as the following: Updating the kernel to kernel-rt. Choosing CPUs for housekeeping. Choosing CPUs for running workloads. Note Currently, disabling CPU load balancing is not supported by cgroup v2. As a result, you might not get the desired behavior from performance profiles if you have cgroup v2 enabled. Enabling cgroup v2 is not recommended if you are using performance profiles. The Node Tuning Operator is part of a standard OpenShift Container Platform installation in version 4.1 and later. Note In earlier versions of OpenShift Container Platform, the Performance Addon Operator was used to implement automatic tuning to achieve low latency performance for OpenShift applications. In OpenShift Container Platform 4.11 and later, this functionality is part of the Node Tuning Operator. Project cluster-node-tuning-operator Additional resources Low latency tuning of OCP nodes 6.31. OpenShift API Server Operator Purpose The OpenShift API Server Operator installs and maintains the openshift-apiserver on a cluster. Project openshift-apiserver-operator CRDs openshiftapiservers.operator.openshift.io Scope: Cluster CR: openshiftapiserver Validation: Yes 6.32. OpenShift Controller Manager Operator Purpose The OpenShift Controller Manager Operator installs and maintains the OpenShiftControllerManager custom resource in a cluster and can be viewed with: USD oc get clusteroperator openshift-controller-manager -o yaml The custom resource definition (CRD) openshiftcontrollermanagers.operator.openshift.io can be viewed in a cluster with: USD oc get crd openshiftcontrollermanagers.operator.openshift.io -o yaml Project cluster-openshift-controller-manager-operator 6.33. Operator Lifecycle Manager Operators Purpose Operator Lifecycle Manager (OLM) helps users install, update, and manage the lifecycle of Kubernetes native applications (Operators) and their associated services running across their OpenShift Container Platform clusters. It is part of the Operator Framework , an open source toolkit designed to manage Operators in an effective, automated, and scalable way. Figure 6.1. Operator Lifecycle Manager workflow OLM runs by default in OpenShift Container Platform 4.13, which aids cluster administrators in installing, upgrading, and granting access to Operators running on their cluster. The OpenShift Container Platform web console provides management screens for cluster administrators to install Operators, as well as grant specific projects access to use the catalog of Operators available on the cluster. For developers, a self-service experience allows provisioning and configuring instances of databases, monitoring, and big data services without having to be subject matter experts, because the Operator has that knowledge baked into it. CRDs Operator Lifecycle Manager (OLM) is composed of two Operators: the OLM Operator and the Catalog Operator. Each of these Operators is responsible for managing the custom resource definitions (CRDs) that are the basis for the OLM framework: Table 6.1. CRDs managed by OLM and Catalog Operators Resource Short name Owner Description ClusterServiceVersion (CSV) csv OLM Application metadata: name, version, icon, required resources, installation, and so on. 
InstallPlan ip Catalog Calculated list of resources to be created to automatically install or upgrade a CSV. CatalogSource catsrc Catalog A repository of CSVs, CRDs, and packages that define an application. Subscription sub Catalog Used to keep CSVs up to date by tracking a channel in a package. OperatorGroup og OLM Configures all Operators deployed in the same namespace as the OperatorGroup object to watch for their custom resource (CR) in a list of namespaces or cluster-wide. Each of these Operators is also responsible for creating the following resources: Table 6.2. Resources created by OLM and Catalog Operators Resource Owner Deployments OLM ServiceAccounts (Cluster)Roles (Cluster)RoleBindings CustomResourceDefinitions (CRDs) Catalog ClusterServiceVersions OLM Operator The OLM Operator is responsible for deploying applications defined by CSV resources after the required resources specified in the CSV are present in the cluster. The OLM Operator is not concerned with the creation of the required resources; you can choose to manually create these resources using the CLI or using the Catalog Operator. This separation of concern allows users incremental buy-in in terms of how much of the OLM framework they choose to leverage for their application. The OLM Operator uses the following workflow: Watch for cluster service versions (CSVs) in a namespace and check that requirements are met. If requirements are met, run the install strategy for the CSV. Note A CSV must be an active member of an Operator group for the install strategy to run. Catalog Operator The Catalog Operator is responsible for resolving and installing cluster service versions (CSVs) and the required resources they specify. It is also responsible for watching catalog sources for updates to packages in channels and upgrading them, automatically if desired, to the latest available versions. To track a package in a channel, you can create a Subscription object configuring the desired package, channel, and the CatalogSource object you want to use for pulling updates. When updates are found, an appropriate InstallPlan object is written into the namespace on behalf of the user. The Catalog Operator uses the following workflow: Connect to each catalog source in the cluster. Watch for unresolved install plans created by a user, and if found: Find the CSV matching the name requested and add the CSV as a resolved resource. For each managed or required CRD, add the CRD as a resolved resource. For each required CRD, find the CSV that manages it. Watch for resolved install plans and create all of the discovered resources for it, if approved by a user or automatically. Watch for catalog sources and subscriptions and create install plans based on them. Catalog Registry The Catalog Registry stores CSVs and CRDs for creation in a cluster and stores metadata about packages and channels. A package manifest is an entry in the Catalog Registry that associates a package identity with sets of CSVs. Within a package, channels point to a particular CSV. Because CSVs explicitly reference the CSV that they replace, a package manifest provides the Catalog Operator with all of the information that is required to update a CSV to the latest version in a channel, stepping through each intermediate version. Additional resources For more information, see the sections on understanding Operator Lifecycle Manager (OLM) . 6.34. OpenShift Service CA Operator Purpose The OpenShift Service CA Operator mints and manages serving certificates for Kubernetes services. 
Project openshift-service-ca-operator 6.35. vSphere Problem Detector Operator Purpose The vSphere Problem Detector Operator checks clusters that are deployed on vSphere for common installation and misconfiguration issues that are related to storage. Note The vSphere Problem Detector Operator is only started by the Cluster Storage Operator when the Cluster Storage Operator detects that the cluster is deployed on vSphere. Configuration No configuration is required. Notes The Operator supports OpenShift Container Platform installations on vSphere. The Operator uses the vsphere-cloud-credentials to communicate with vSphere. The Operator performs checks that are related to storage. Additional resources For more details, see Using the vSphere Problem Detector Operator . | [
"oc -n [namespace] edit cm hw-event-proxy-operator-manager-config",
"apiVersion: controller-runtime.sigs.k8s.io/v1alpha1 kind: ControllerManagerConfig health: healthProbeBindAddress: :8081 metrics: bindAddress: 127.0.0.1:8080 webhook: port: 9443 leaderElection: leaderElect: true resourceName: 6e7a703c.redhat-cne.org",
"oc get clusteroperator authentication -o yaml",
"oc -n openshift-monitoring edit cm cluster-monitoring-config",
"oc edit etcd cluster",
"oc get clusteringresses.ingress.openshift.io -n openshift-ingress-operator default -o yaml",
"oc get deployment -n openshift-ingress",
"oc get network/cluster -o jsonpath='{.status.clusterNetwork[*]}'",
"map[cidr:10.128.0.0/14 hostPrefix:23]",
"oc edit kubeapiserver",
"oc get clusteroperator openshift-controller-manager -o yaml",
"oc get crd openshiftcontrollermanagers.operator.openshift.io -o yaml"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/operators/cluster-operators-ref |
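The Operators listed in the preceding reference can be inspected with ordinary oc commands. As a hedged illustration (the resource and namespace names below are the usual defaults and may differ in your cluster), the following commands list the overall Operator status and then look at two of the custom resources mentioned above, the ControlPlaneMachineSet and the cluster-wide etcd configuration:

# List every cluster Operator and its Available/Progressing/Degraded status.
oc get clusteroperators

# Show the ControlPlaneMachineSet custom resource managed by the Control Plane Machine Set Operator
# (it normally lives in the openshift-machine-api namespace).
oc get controlplanemachineset -n openshift-machine-api

# Inspect the cluster-scoped etcd configuration object managed by the etcd cluster Operator.
oc get etcd cluster -o yaml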
Chapter 1. Introduction to transactions | Chapter 1. Introduction to transactions This chapter introduces transactions by discussing some basic transaction concepts as well as the service qualities that are important in a transaction manager. The information is organized as follows: Section 1.1, "What is a transaction?" Section 1.2, "ACID properties of a transaction" Section 1.3, "About transaction clients" Section 1.4, "Descriptions of transaction terms" Section 1.5, "Managing transactions that modify multiple resources" Section 1.6, "Relationship between transactions and threads" Section 1.7, "About transaction service qualities" 1.1. What is a transaction? The prototype of a transaction is an operation that conceptually consists of a single step (for example, transfer money from account A to account B), but must be implemented as a series of steps. Such operations are vulnerable to system failures because a failure is likely to leave some of the steps unfinished, which leaves the system in an inconsistent state. For example, consider the operation of transferring money from account A to account B. Suppose that the system fails after debiting account A, but before crediting account B. The result is that some money disappears. To ensure that an operation like this is reliable, implement it as a transaction . A transaction guarantees reliable execution because it is atomic, consistent, isolated, and durable. These properties are referred to as a transaction's ACID properties. 1.2. ACID properties of a transaction The ACID properties of a transaction are defined as follows: Atomic -a transaction is an all or nothing procedure. Individual updates are assembled and either committed or aborted (rolled back) simultaneously when the transaction completes. Consistent -a transaction is a unit of work that takes a system from one consistent state to another consistent state. Isolated -while a transaction is executing, its partial results are hidden from other entities. Durable -the results of a transaction are persistent even if the system fails immediately after a transaction has been committed. 1.3. About transaction clients A transaction client is an API or object that enables you to initiate and end transactions. Typically, a transaction client exposes operations that begin , commit , or roll back a transaction. In a standard JavaEE application, the javax.transaction.UserTransaction interface exposes the transaction client API. In the context of the Spring Framework, Spring Boot, the org.springframework.transaction.PlatformTransactionManager interface exposes a transaction client API. 1.4. Descriptions of transaction terms The following table defines some important transaction terms: Term Description Demarcation Transaction demarcation refers to starting and ending transactions. Ending transactions means that the work done in the transaction is either committed or rolled back. Demarcation can be explicit, for example, by calling a transaction client API, or implicit, for example, whenever a message is polled from a transactional endpoint. For details, see Chapter 9, Writing a Camel application that uses transactions . Resources A resource is any component of a computer system that can undergo a persistent or permanent change. In practice, a resource is almost always a database or a service layered over a database, for example, a message service with persistence. Other kinds of resource are conceivable, however. For example, an Automated Teller Machine (ATM) is a kind of resource. 
After a customer has physically accepted cash from the machine, the transaction cannot be reversed. Transaction manager A transaction manager is responsible for coordinating transactions across one or more resources. In many cases, a transaction manager is built into a resource. For example, enterprise-level databases typically include a transaction manager that is capable of managing transactions that change content in that database. Transactions that involve more than one resource usually require an external transaction manager. Transaction context A transaction context is an object that encapsulates the information needed to keep track of a transaction. The format of a transaction context depends entirely on the relevant transaction manager implementation. At a minimum, the transaction context contains a unique transaction identifier. Distributed transactions A distributed transaction refers to a transaction in a distributed system, where the transaction scope spans multiple network nodes. A basic prerequisite for supporting distributed transactions is a network protocol that supports transmission of transaction contexts in a canonical format. Distributed transactions are outside the scope of Apache Camel transactions. See also: Section 3.2.3, "About distributed transaction managers" . X/Open XA standard The X/Open XA standard describes an interface for integrating resources with a transaction manager. To manage a transaction that includes more than one resource, participating resources must support the XA standard. Resources that support the XA standard expose a special object, the XA switch , which enables transaction managers (or transaction processing monitors) to take control of the resource's transactions. The XA standard supports both the 1-phase commit protocol and the 2-phase commit protocol. 1.5. Managing transactions that modify multiple resources For transactions that involve a single resource, the transaction manager built into the resource can usually be used. For transactions that involve multiple resources, it is necessary to use an external transaction manager or a transaction processing (TP) monitor. In this case, the resources must be integrated with the transaction manager by registering their XA switches. There is an important difference between the protocol that is used to commit a transaction that operates on a single-resource system and the protocol that is used to commit a transaction that operates on a multiple-resource systems: 1-phase commit -is for single-resource systems. This protocol commits a transaction in a single step. 2-phase commit -is for multiple-resource systems. This protocol commits a transaction in two steps. Including multiple resources in a transaction adds the risk that a system failure might occur after committing the transaction on some, but not all, of the resources. This would leave the system in an inconsistent state. The 2-phase commit protocol is designed to eliminate this risk. It ensures that the system can always be restored to a consistent state after it is restarted. 1.6. Relationship between transactions and threads To understand transaction processing, it is crucial to appreciate the basic relationship between transactions and threads: transactions are thread-specific. That is, when a transaction is started, it is attached to a specific thread. (Technically, a transaction context object is created and associated with the current thread). 
From this point until the transaction ends, all of the activity in the thread occurs within this transaction scope. Activity in any other thread does not fall within this transaction's scope. However, activity in any other thread can fall within the scope of some other transaction. This relationship between transactions and thread means: An application can process multiple transactions simultaneously as long as each transaction is created in a separate thread. Beware of creating subthreads within a transaction . If you are in the middle of a transaction and you create a new pool of threads, for example, by calling the threads() Camel DSL command, the new threads are not in the scope of the original transaction. Beware of processing steps that implicitly create new threads for the same reason given in the preceding point. Transaction scopes do not usually extend across route segments . That is, if one route segment ends with to( JoinEndpoint ) and another route segment starts with from( JoinEndpoint ) , these route segments typically do not belong to the same transaction. There are exceptions, however. Note Some advanced transaction manager implementations give you the freedom to detach and attach transaction contexts to and from threads at will. For example, this makes it possible to move a transaction context from one thread to another thread. In some cases, it is also possible to attach a single transaction context to multiple threads. 1.7. About transaction service qualities When it comes to choosing the products that implement your transaction system, there is a great variety of database products and transaction managers available, some free of charge and some commercial. All of them have nominal support for transaction processing, but there are considerable variations in the qualities of service supported by these products. This section provides a brief guide to the kind of features that you need to consider when comparing the reliability and sophistication of different transaction products. 1.7.1. Qualities of service provided by resources The following features determine the quality of service of a resource: Section 1.7.1.1, "Transaction isolation levels" Section 1.7.1.2, "Support for the XA standard" 1.7.1.1. Transaction isolation levels ANSI SQL defines four transaction isolation levels , as follows: SERIALIZABLE Transactions are perfectly isolated from each other. That is, nothing that one transaction does can affect any other transaction until the transaction is committed. This isolation level is described as serializable , because the effect is as if all transactions were executed one after the other (although in practice, the resource can often optimize the algorithm, so that some transactions are allowed to proceed simultaneously). REPEATABLE_READ Every time a transaction reads or updates the database, a read or write lock is obtained and held until the end of the transaction. This provides almost perfect isolation. But there is one case where isolation is not perfect. Consider a SQL SELECT statement that reads a range of rows by using a WHERE clause. If another transaction adds a row to this range while the first transaction is running, the first transaction can see this new row, if it repeats the SELECT call (a phantom read ). READ_COMMITTED Write locks are held until the end of a transaction. Read locks are not held until the end of a transaction. 
Consequently, repeated reads can give different results because updates committed by other transactions become visible to an ongoing transaction. READ_UNCOMMITTED Neither read locks nor write locks are held until the end of a transaction. Hence, dirty reads are possible. A dirty read is when uncommitted changes made by other transactions are visible to an ongoing transaction. Databases generally do not support all of the different transaction isolation levels. For example, some free databases support only READ_UNCOMMITTED . Also, some databases implement transaction isolation levels in ways that are subtly different from the ANSI standard. Isolation is a complicated issue that involves trade-offs with database performance (for example, see Isolation in Wikipedia ). 1.7.1.2. Support for the XA standard For a resource to participate in a transaction that involves multiple resources, it needs to support the X/Open XA standard. Be sure to check whether the resource's implementation of the XA standard is subject to any special restrictions. For example, some implementations of the XA standard are restricted to a single database connection, which implies that only one thread at a time can process a transaction that involves that resource. 1.7.2. Qualities of service provided by transaction managers The following features determine the quality of service of a transaction manager: Section 1.7.2.1, "Support for suspend/resume and attach/detach" . Section 1.7.2.2, "Support for multiple resources" . Section 1.7.2.3, "Distributed transactions" . Section 1.7.2.4, "Transaction monitoring" . Section 1.7.2.5, "Recovery from failure" . 1.7.2.1. Support for suspend/resume and attach/detach Some transaction managers support advanced capabilities for manipulating the associations between a transaction context and application threads, as follows: Suspend/resume current transaction -enables you to temporarily suspend the current transaction context, while the application does some non-transactional work in the current thread. Attach/detach transaction context -enables you to move a transaction context from one thread to another or to extend a transaction scope to include multiple threads. 1.7.2.2. Support for multiple resources A key differentiator for transaction managers is the ability to support multiple resources. This normally entails support for the XA standard, where the transaction manager provides a way for resources to register their XA switches. Note Strictly speaking, the XA standard is not the only approach you can use to support multiple resources, but it is the most practical one. The alternative typically involves writing tedious (and critical) custom code to implement the algorithms normally provided by an XA switch. 1.7.2.3. Distributed transactions Some transaction managers have the capability to manage transactions whose scope includes multiple nodes in a distributed system. The transaction context is propagated from node to node by using special protocols such as WS-AtomicTransactions or CORBA OTS. 1.7.2.4. Transaction monitoring Advanced transaction managers typically provide visual tools to monitor the status of pending transactions. This kind of tool is particularly useful after a system failure, where it can help to identify and resolve transactions that were left in an uncertain state (heuristic exceptions). 1.7.2.5. Recovery from failure There are significant differences among transaction managers with respect to their robustness in the event of a system failure (crash).
The key strategy that transaction managers use is to write data to a persistent log before performing each step of a transaction. In the event of a failure, the data in the log can be used to recover the transaction. Some transaction managers implement this strategy more carefully than others. For example, a high-end transaction manager would typically duplicate the persistent transaction log and allow each of the logs to be stored on separate host machines. | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_karaf_transaction_guide/introduction-to-transactions |
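To make the transaction client API from Section 1.3 concrete, the following minimal Java sketch demarcates a transaction explicitly with javax.transaction.UserTransaction. It assumes a JavaEE container that exposes the standard java:comp/UserTransaction JNDI name; the doWork() call is a placeholder for whatever updates your transactional resources perform.

import javax.naming.InitialContext;
import javax.transaction.UserTransaction;

// Minimal sketch of explicit transaction demarcation; not a complete application.
public class TransferExample {
    public void transfer() throws Exception {
        // Obtain the transaction client from the container's standard JNDI name.
        UserTransaction utx = (UserTransaction) new InitialContext().lookup("java:comp/UserTransaction");
        utx.begin();
        try {
            doWork();       // placeholder: for example, debit account A and credit account B
            utx.commit();   // all steps become permanent together
        } catch (Exception e) {
            utx.rollback(); // any failure undoes every step performed so far
            throw e;
        }
    }

    private void doWork() {
        // placeholder for updates against transactional resources (JDBC, JMS, and so on)
    }
}

In a Spring Boot application, the equivalent demarcation is usually expressed declaratively, and org.springframework.transaction.PlatformTransactionManager performs the same begin, commit, and roll back steps behind the scenes.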
Chapter 2. Apache Maven and Red Hat Decision Manager Spring Boot applications | Chapter 2. Apache Maven and Red Hat Decision Manager Spring Boot applications Apache Maven is a distributed build automation tool used in Java application development to create, manage, and build software projects. Maven uses standard configuration files called Project Object Model (POM) files to define projects and manage the build process. POM files describe the module and component dependencies, build order, and targets for the resulting project packaging and output using an XML file. This ensures that the project is built in a correct and uniform manner. A Maven repository stores Java libraries, plug-ins, and other build artifacts. The default public repository is the Maven 2 Central Repository, but repositories can be private and internal within a company to share common artifacts among development teams. Repositories are also available from third parties. You can use the online Maven repository with your Spring Boot projects or you can download the Red Hat Decision Manager Maven repository. The recommended approach is to use the online Maven repository with your Spring Boot projects. Maven settings used with a repository manager or repository on a shared server provide better control and manageability of projects. | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/integrating_red_hat_decision_manager_with_other_products_and_components/maven-con_business-applications |
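As a sketch of what the repository configuration described above can look like in practice, the following pom.xml fragment declares an additional Maven repository for release artifacts. The repository id is arbitrary, and the URL shown is the commonly used Red Hat GA repository; substitute your own repository manager URL if you use one.

<!-- Illustrative pom.xml fragment: declaring an extra repository for release artifacts. -->
<repositories>
  <repository>
    <id>red-hat-ga-repository</id>
    <url>https://maven.repository.redhat.com/ga/</url>
    <releases>
      <enabled>true</enabled>
    </releases>
    <snapshots>
      <enabled>false</enabled>
    </snapshots>
  </repository>
</repositories>

The same block can instead live in a profile in settings.xml, which keeps project POM files free of repository details and is the usual choice when a shared repository manager is in place.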
Chapter 9. Using canonicalized DNS host names in IdM | Chapter 9. Using canonicalized DNS host names in IdM DNS canonicalization is disabled by default on Identity Management (IdM) clients to avoid potential security risks. For example, if an attacker controls the DNS server and a host in the domain, the attacker can cause the short host name, such as demo , to resolve to a compromised host, such as malicious.example.com . In this case, the user connects to a different server than expected. This procedure describes how to use canonicalized host names on IdM clients. 9.1. Adding an alias to a host principal By default, Identity Management (IdM) clients enrolled by using the ipa-client-install command do not allow the use of short host names in service principals. For example, users can use only the full form host/[email protected] instead of the short form host/[email protected] when accessing a service. Follow this procedure to add an alias to a Kerberos principal. Note that you can alternatively enable canonicalization of host names in the /etc/krb5.conf file. For details, see Enabling canonicalization of host names in service principals on clients . Prerequisites The IdM client is installed. The host name is unique in the network. Procedure Authenticate to IdM as the admin user: Add the alias to the host principal. For example, to add the demo alias to the demo.example.com host principal: 9.2. Enabling canonicalization of host names in service principals on clients Follow this procedure to enable canonicalization of host names in service principals on clients. Note that if you use host principal aliases, as described in Adding an alias to a host principal , you do not need to enable canonicalization. Prerequisites The Identity Management (IdM) client is installed. You are logged in to the IdM client as the root user. The host name is unique in the network. Procedure Set the dns_canonicalize_hostname parameter in the [libdefaults] section in the /etc/krb5.conf file to true : 9.3. Options for using host names with DNS host name canonicalization enabled If you set dns_canonicalize_hostname = true in the /etc/krb5.conf file as explained in Enabling canonicalization of host names in service principals on clients , you have the following options when you use a host name in a service principal: In Identity Management (IdM) environments, you can use the full host name in a service principal, such as host/[email protected] . In environments without IdM, but where the RHEL host is a member of an Active Directory (AD) domain, no further considerations are required, because AD domain controllers (DC) automatically create service principals for NetBIOS names of the machines enrolled into AD. | [
"kinit admin",
"ipa host-add-principal demo.example.com --principal= demo",
"[libdefaults] dns_canonicalize_hostname = true"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/working_with_dns_in_identity_management/using-canonicalized-dns-host-names-in-idm_working-with-dns-in-identity-management |
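To confirm the outcome of the procedures above, you can display the host entry and request a service ticket for the short name. The commands below are illustrative and reuse the demo.example.com host from the examples; they assume you hold a valid Kerberos ticket.

# The new alias appears in the "Principal alias" field of the host entry.
ipa host-show demo.example.com

# Request a service ticket for the short principal to confirm that it is accepted.
kvno host/demo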
Chapter 4. Stretch clusters for Ceph storage | Chapter 4. Stretch clusters for Ceph storage As a storage administrator, you can configure stretch clusters by entering stretch mode with 2-site clusters. Red Hat Ceph Storage is capable of withstanding the loss of Ceph OSDs because of its network and cluster, which are equally reliable with failures randomly distributed across the CRUSH map. If a number of OSDs is shut down, the remaining OSDs and monitors still manage to operate. However, this might not be the best solution for some stretched cluster configurations where a significant part of the Ceph cluster can use only a single network component. The example is a single cluster located in multiple data centers, for which the user wants to sustain a loss of a full data center. The standard configuration is with two data centers. Other configurations are in clouds or availability zones. Each site holds two copies of the data, therefore, the replication size is four. The third site should have a tiebreaker monitor, this can be a virtual machine or high-latency compared to the main sites. This monitor chooses one of the sites to restore data if the network connection fails and both data centers remain active. Note The standard Ceph configuration survives many failures of the network or data centers and it never compromises data consistency. If you restore enough Ceph servers following a failure, it recovers. Ceph maintains availability if you lose a data center, but can still form a quorum of monitors and have all the data available with enough copies to satisfy pools' min_size , or CRUSH rules that replicate again to meet the size. Note There are no additional steps to power down a stretch cluster. You can see the Powering down and rebooting Red Hat Ceph Storage cluster for more information. Stretch cluster failures Red Hat Ceph Storage never compromises on data integrity and consistency. If there is a network failure or a loss of nodes and the services can still be restored, Ceph returns to normal functionality on its own. However, there are situations where you lose data availability even if you have enough servers available to meet Ceph's consistency and sizing constraints, or where you unexpectedly do not meet the constraints. First important type of failure is caused by inconsistent networks. If there is a network split, Ceph might be unable to mark OSD as down to remove it from the acting placement group (PG) sets despite the primary OSD being unable to replicate data. When this happens, the I/O is not permitted because Ceph cannot meet its durability guarantees. The second important category of failures is when it appears that you have data replicated across data enters, but the constraints are not sufficient to guarantee this. For example, you might have data centers A and B, and the CRUSH rule targets three copies and places a copy in each data center with a min_size of 2 . The PG might go active with two copies in site A and no copies in site B, which means that if you lose site A, you lose the data and Ceph cannot operate on it. This situation is difficult to avoid with standard CRUSH rules. 4.1. Stretch mode for a storage cluster To configure stretch clusters, you must enter the stretch mode. When stretch mode is enabled, the Ceph OSDs only take PGs as active when they peer across data centers, or whichever other CRUSH bucket type you specified, assuming both are active. Pools increase in size from the default three to four, with two copies on each site. 
In stretch mode, Ceph OSDs are only allowed to connect to monitors within the same data center. New monitors are not allowed to join the cluster without a specified location. If all the OSDs and monitors from a data center become inaccessible at once, the surviving data center will enter a degraded stretch mode. This issues a warning, reduces the min_size to 1 , and allows the cluster to reach an active state with the data from the remaining site. Note The degraded state also triggers warnings that the pools are too small, because the pool size does not get changed. However, a special stretch mode flag prevents the OSDs from creating extra copies in the remaining data center, therefore it still keeps 2 copies. When the missing data center becomes accessible again, the cluster enters recovery stretch mode. This changes the warning and allows peering, but still requires only the OSDs from the data center, which was up the whole time. When all PGs are in a known state and are not degraded or incomplete, the cluster goes back to the regular stretch mode, ends the warning, and restores min_size to its starting value 2 . The cluster again requires both sites to peer, not only the site that stayed up the whole time, therefore you can fail over to the other site, if necessary. Stretch mode limitations It is not possible to exit from stretch mode once it is entered. You cannot use erasure-coded pools with clusters in stretch mode. You can neither enter the stretch mode with erasure-coded pools, nor create an erasure-coded pool when the stretch mode is active. Stretch mode with no more than two sites is supported. The weights of the two sites should be the same. If they are not, you receive the following error: Example To achieve the same weights on both sites, the Ceph OSDs deployed in the two sites should be of equal size, that is, storage capacity in the first site is equivalent to storage capacity in the second site. While it is not enforced, you should run two Ceph monitors on each site and a tiebreaker, for a total of five. This is because OSDs can only connect to monitors in their own site when in stretch mode. You have to create your own CRUSH rule, which provides two copies on each site, which totals to four on both sites. You cannot enable stretch mode if you have existing pools with non-default size or min_size . Because the cluster runs with min_size 1 when degraded, you should only use stretch mode with all-flash OSDs. This minimizes the time needed to recover once connectivity is restored, and minimizes the potential for data loss. Additional Resources See Troubleshooting clusters in stretch mode for troubleshooting steps. 4.1.1. Setting the crush location for the daemons Before you enter the stretch mode, you need to prepare the cluster by setting the crush location to the daemons in the Red Hat Ceph Storage cluster. There are two ways to do this: Bootstrap the cluster through a service configuration file, where the locations are added to the hosts as part of deployment. Set the locations manually through ceph osd crush add-bucket and ceph osd crush move commands after the cluster is deployed. Method 1: Bootstrapping the cluster Prerequisites Root-level access to the nodes.
Procedure If you are bootstrapping your new storage cluster, you can create the service configuration .yaml file that adds the nodes to the Red Hat Ceph Storage cluster and also sets specific labels for where the services should run: Example Bootstrap the storage cluster with the --apply-spec option: Syntax Example Important You can use different command options with the cephadm bootstrap command. However, always include the --apply-spec option to use the service configuration file and configure the host locations. Additional Resources See Bootstrapping a new storage cluster for more information about Ceph bootstrapping and different cephadm bootstrap command options. Method 2: Setting the locations after the deployment Prerequisites Root-level access to the nodes. Procedure Add two buckets to which you plan to set the location of your non-tiebreaker monitors to the CRUSH map, specifying the bucket type as as datacenter : Syntax Example Move the buckets under root=default : Syntax Example Move the OSD hosts according to the required CRUSH placement: Syntax Example 4.1.2. Entering the stretch mode The new stretch mode is designed to handle two sites. There is a lower risk of component availability outages with 2-site clusters. Prerequisites Root-level access to the nodes. The crush location is set to the hosts. Procedure Set the location of each monitor, matching your CRUSH map: Syntax Example Generate a CRUSH rule which places two copies on each data center: Syntax Example Edit the decompiled CRUSH map file to add a new rule: Example 1 The rule id has to be unique. In this example, there is only one more rule with id 0 , thereby the id 1 is used, however you might need to use a different rule ID depending on the number of existing rules. 2 3 In this example, there are two data center buckets named DC1 and DC2 . Note This rule makes the cluster have read-affinity towards data center DC1 . Therefore, all the reads or writes happen through Ceph OSDs placed in DC1 . If this is not desirable, and reads or writes are to be distributed evenly across the zones, the crush rule is the following: Example In this rule, the data center is selected randomly and automatically. See CRUSH rules for more information on firstn and indep options. Inject the CRUSH map to make the rule available to the cluster: Syntax Example If you do not run the monitors in connectivity mode, set the election strategy to connectivity : Example Enter stretch mode by setting the location of the tiebreaker monitor to split across the data centers: Syntax Example In this example the monitor mon.host07 is the tiebreaker. Important The location of the tiebreaker monitor should differ from the data centers to which you previously set the non-tiebreaker monitors. In the example above, it is data center DC3 . Important Do not add this data center to the CRUSH map as it results in the following error when you try to enter stretch mode: Note If you are writing your own tooling for deploying Ceph, you can use a new --set-crush-location option when booting monitors, instead of running the ceph mon set_location command. This option accepts only a single bucket=location pair, for example ceph-mon --set-crush-location 'datacenter=DC1' , which must match the bucket type you specified when running the enable_stretch_mode command. Verify that the stretch mode is enabled successfully: Example The stretch_mode_enabled should be set to true . 
You can also see the number of stretch buckets, stretch mode buckets, and whether the stretch mode is degraded or recovering. Verify that the monitors are in the appropriate locations: Example You can also see which monitor is the tiebreaker, and the monitor election strategy. Additional Resources See Configuring monitor election strategy for more information about monitor election strategy. 4.1.3. Adding OSD hosts in stretch mode You can add Ceph OSDs in stretch mode. The procedure is similar to the addition of the OSD hosts on a cluster where stretch mode is not enabled. Prerequisites A running Red Hat Ceph Storage cluster. Stretch mode is enabled on the cluster. Root-level access to the nodes. Procedure List the available devices to deploy OSDs: Syntax Example Deploy the OSDs on specific hosts or on all the available devices: Create an OSD from a specific device on a specific host: Syntax Example Deploy OSDs on any available and unused devices: Important This command creates collocated WAL and DB devices. If you want to create non-collocated devices, do not use this command. Example Move the OSD hosts under the CRUSH bucket: Syntax Example Note Ensure you add the same topology nodes on both sites. Issues might arise if hosts are added only on one site. Additional Resources See Adding OSDs for more information about the addition of Ceph OSDs. | [
"ceph mon enable_stretch_mode host05 stretch_rule datacenter Error EINVAL: the 2 datacenter instances in the cluster have differing weights 25947 and 15728 but stretch mode currently requires they be the same!",
"service_type: host addr: host01 hostname: host01 location: root: default datacenter: DC1 labels: - osd - mon - mgr --- service_type: host addr: host02 hostname: host02 location: datacenter: DC1 labels: - osd - mon --- service_type: host addr: host03 hostname: host03 location: datacenter: DC1 labels: - osd - mds - rgw --- service_type: host addr: host04 hostname: host04 location: root: default datacenter: DC2 labels: - osd - mon - mgr --- service_type: host addr: host05 hostname: host05 location: datacenter: DC2 labels: - osd - mon --- service_type: host addr: host06 hostname: host06 location: datacenter: DC2 labels: - osd - mds - rgw --- service_type: host addr: host07 hostname: host07 labels: - mon --- service_type: mon placement: label: \"mon\" --- service_id: cephfs placement: label: \"mds\" --- service_type: mgr service_name: mgr placement: label: \"mgr\" --- service_type: osd service_id: all-available-devices service_name: osd.all-available-devices placement: label: \"osd\" spec: data_devices: all: true --- service_type: rgw service_id: objectgw service_name: rgw.objectgw placement: count: 2 label: \"rgw\" spec: rgw_frontend_port: 8080",
"cephadm bootstrap --apply-spec CONFIGURATION_FILE_NAME --mon-ip MONITOR_IP_ADDRESS --ssh-private-key PRIVATE_KEY --ssh-public-key PUBLIC_KEY --registry-url REGISTRY_URL --registry-username USER_NAME --registry-password PASSWORD",
"cephadm bootstrap --apply-spec initial-config.yaml --mon-ip 10.10.128.68 --ssh-private-key /home/ceph/.ssh/id_rsa --ssh-public-key /home/ceph/.ssh/id_rsa.pub --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1",
"ceph osd crush add-bucket BUCKET_NAME BUCKET_TYPE",
"ceph osd crush add-bucket DC1 datacenter ceph osd crush add-bucket DC2 datacenter",
"ceph osd crush move BUCKET_NAME root=default",
"ceph osd crush move DC1 root=default ceph osd crush move DC2 root=default",
"ceph osd crush move HOST datacenter= DATACENTER",
"ceph osd crush move host01 datacenter=DC1",
"ceph mon set_location HOST datacenter= DATACENTER",
"ceph mon set_location host01 datacenter=DC1 ceph mon set_location host02 datacenter=DC1 ceph mon set_location host04 datacenter=DC2 ceph mon set_location host05 datacenter=DC2 ceph mon set_location host07 datacenter=DC3",
"ceph osd getcrushmap > COMPILED_CRUSHMAP_FILENAME crushtool -d COMPILED_CRUSHMAP_FILENAME -o DECOMPILED_CRUSHMAP_FILENAME",
"ceph osd getcrushmap > crush.map.bin crushtool -d crush.map.bin -o crush.map.txt",
"rule stretch_rule { id 1 1 type replicated min_size 1 max_size 10 step take DC1 2 step chooseleaf firstn 2 type host step emit step take DC2 3 step chooseleaf firstn 2 type host step emit }",
"rule stretch_rule { id 1 type replicated min_size 1 max_size 10 step take default step choose firstn 0 type datacenter step chooseleaf firstn 2 type host step emit }",
"crushtool -c DECOMPILED_CRUSHMAP_FILENAME -o COMPILED_CRUSHMAP_FILENAME ceph osd setcrushmap -i COMPILED_CRUSHMAP_FILENAME",
"crushtool -c crush.map.txt -o crush2.map.bin ceph osd setcrushmap -i crush2.map.bin",
"ceph mon set election_strategy connectivity",
"ceph mon set_location HOST datacenter= DATACENTER ceph mon enable_stretch_mode HOST stretch_rule datacenter",
"ceph mon set_location host07 datacenter=DC3 ceph mon enable_stretch_mode host07 stretch_rule datacenter",
"Error EINVAL: there are 3 datacenters in the cluster but stretch mode currently only works with 2!",
"ceph osd dump epoch 361 fsid 1234ab78-1234-11ed-b1b1-de456ef0a89d created 2023-01-16T05:47:28.482717+0000 modified 2023-01-17T17:36:50.066183+0000 flags sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit crush_version 31 full_ratio 0.95 backfillfull_ratio 0.92 nearfull_ratio 0.85 require_min_compat_client luminous min_compat_client luminous require_osd_release quincy stretch_mode_enabled true stretch_bucket_count 2 degraded_stretch_mode 0 recovering_stretch_mode 0 stretch_mode_bucket 8",
"ceph mon dump epoch 19 fsid 1234ab78-1234-11ed-b1b1-de456ef0a89d last_changed 2023-01-17T04:12:05.709475+0000 created 2023-01-16T05:47:25.631684+0000 min_mon_release 16 (pacific) election_strategy: 3 stretch_mode_enabled 1 tiebreaker_mon host07 disallowed_leaders host07 0: [v2:132.224.169.63:3300/0,v1:132.224.169.63:6789/0] mon.host07; crush_location {datacenter=DC3} 1: [v2:220.141.179.34:3300/0,v1:220.141.179.34:6789/0] mon.host04; crush_location {datacenter=DC2} 2: [v2:40.90.220.224:3300/0,v1:40.90.220.224:6789/0] mon.host01; crush_location {datacenter=DC1} 3: [v2:60.140.141.144:3300/0,v1:60.140.141.144:6789/0] mon.host02; crush_location {datacenter=DC1} 4: [v2:186.184.61.92:3300/0,v1:186.184.61.92:6789/0] mon.host05; crush_location {datacenter=DC2} dumped monmap epoch 19",
"ceph orch device ls [--hostname= HOST_1 HOST_2 ] [--wide] [--refresh]",
"ceph orch device ls",
"ceph orch daemon add osd HOST : DEVICE_PATH",
"ceph orch daemon add osd host03:/dev/sdb",
"ceph orch apply osd --all-available-devices",
"ceph osd crush move HOST datacenter= DATACENTER",
"ceph osd crush move host03 datacenter=DC1 ceph osd crush move host06 datacenter=DC2"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/administration_guide/stretch-clusters-for-ceph-storage |
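After the hosts are moved under the data center buckets as described in this chapter, it is worth confirming the resulting CRUSH hierarchy and the rule that stretch mode uses. The following commands are illustrative and reuse the DC1, DC2, and stretch_rule names from the examples above.

# The tree output should show the DC1 and DC2 datacenter buckets under root=default,
# with the expected hosts and OSDs beneath each of them.
ceph osd tree

# Inspect the replicated rule that was injected into the CRUSH map for stretch mode.
ceph osd crush rule dump stretch_rule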
Pipelines | Pipelines OpenShift Container Platform 4.15 A cloud-native continuous integration and continuous delivery solution based on Kubernetes resources Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/pipelines/index |
Chapter 10. Debezium Server (Developer Preview) | Chapter 10. Debezium Server (Developer Preview) Debezium Server is a ready-to-use application that streams change events from a data source directly to a configured data sink without relying on an Apache Kafka Connect infrastructure. Important Debezium Server is Developer Preview software only. Developer Preview software is not supported by Red Hat in any way and is not functionally complete or production-ready. Do not use Developer Preview software for production or business-critical workloads. Developer Preview software provides early access to upcoming product software in advance of its possible inclusion in a Red Hat product offering. Customers can use this software to test functionality and provide feedback during the development process. This software might not have any documentation, is subject to change or removal at any time, and has received limited testing. Red Hat might provide ways to submit feedback on Developer Preview software without an associated SLA. For more information about the support scope of Red Hat Developer Preview software, see Developer Preview Support Scope . 10.1. Debezium Server sink destinations For this Developer Preview release of Debezium Server, you can configure the following sink destinations: Apache Kafka Redis Streams Note The Debezium community documentation refers to the use of several other data sinks. For this Developer Preview release, configure Debezium Server to use one of the data sinks in the preceding list. 10.2. Change event streaming with Debezium Server and an Apache Kafka event sink vs. with Kafka Connect and the Debezium connectors Debezium Server includes connectors that can send event data directly to a Kafka sink. It does not require you to deploy connectors or deploy a Kafka Connect cluster. If you already run an Apache Kafka cluster, you can use that cluster to test Debezium Server by configuring the cluster as a sink destination. Note Running Debezium Server with a Kafka sink does not provide some features that are available when you run the Debezium source connectors on Kafka Connect. For example, a Debezium Server connector can use only a single task to write to a data sink, whereas, most connectors that run on Kafka Connect can use multiple tasks. Also, Debezium Server does not provide advanced features, such as customized automatic topic creation. 10.3. Deploying and using Debezium Server You can obtain the Developer Preview version of Debezium Server from the Software Downloads on the Red Hat Customer Portal. For information about how to deploy and use Debezium Server, see the Debezium community documentation . Additional resources For information about deploying Debezium connectors to Kafka Connect, see the source connector documentation . Revised on 2024-10-10 19:36:09 UTC | null | https://docs.redhat.com/en/documentation/red_hat_build_of_debezium/2.7.3/html/debezium_user_guide/debezium-server |
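Debezium Server is configured through a single properties file (conf/application.properties in the distribution) rather than through Kafka Connect REST calls. The sketch below shows the general shape of a PostgreSQL-to-Kafka configuration; it follows the debezium.source.* and debezium.sink.* naming conventions from the community documentation, and every host name, credential, and prefix is a placeholder to adapt to your environment.

# Illustrative conf/application.properties for Debezium Server with a Kafka sink.
debezium.sink.type=kafka
debezium.sink.kafka.producer.bootstrap.servers=my-kafka-bootstrap:9092
debezium.sink.kafka.producer.key.serializer=org.apache.kafka.common.serialization.StringSerializer
debezium.sink.kafka.producer.value.serializer=org.apache.kafka.common.serialization.StringSerializer
debezium.source.connector.class=io.debezium.connector.postgresql.PostgresConnector
debezium.source.offset.storage.file.filename=data/offsets.dat
debezium.source.database.hostname=postgres.example.com
debezium.source.database.port=5432
debezium.source.database.user=debezium
debezium.source.database.password=changeme
debezium.source.database.dbname=inventory
debezium.source.topic.prefix=tutorial

Because this is Developer Preview content, check the community documentation for the authoritative list of options for your chosen sink before relying on any particular property.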
Chapter 86. DockerOutput schema reference | Chapter 86. DockerOutput schema reference Used in: Build The type property is a discriminator that distinguishes use of the DockerOutput type from ImageStreamOutput . It must have the value docker for the type DockerOutput . Property Description image The full name which should be used for tagging and pushing the newly built image. For example quay.io/my-organization/my-custom-connect:latest . Required. string pushSecret Container Registry Secret with the credentials for pushing the newly built image. string additionalKanikoOptions Configures additional options which will be passed to the Kaniko executor when building the new Connect image. Allowed options are: --customPlatform, --insecure, --insecure-pull, --insecure-registry, --log-format, --log-timestamp, --registry-mirror, --reproducible, --single-snapshot, --skip-tls-verify, --skip-tls-verify-pull, --skip-tls-verify-registry, --verbosity, --snapshotMode, --use-new-run. These options are used only on Kubernetes, where the Kaniko executor is used; they are ignored on OpenShift. The options are described in the Kaniko GitHub repository . Changing this field does not trigger a new build of the Kafka Connect image. string array type Must be docker . string | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-dockeroutput-reference |
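To show where DockerOutput fits in practice, the following fragment sketches the build section of a KafkaConnect custom resource that pushes the newly built image to a container registry. Only the build-related fields are shown; the image, Secret, and plugin artifact values are placeholders.

# Illustrative fragment of a KafkaConnect resource (apiVersion kafka.strimzi.io/v1beta2)
# using a DockerOutput build target; surrounding fields such as replicas and bootstrapServers are omitted.
spec:
  build:
    output:
      type: docker
      image: quay.io/my-organization/my-custom-connect:latest
      pushSecret: my-registry-credentials
      additionalKanikoOptions:
        - --verbosity=info
    plugins:
      - name: my-connector
        artifacts:
          - type: tgz
            url: https://example.com/artifacts/my-connector.tar.gz

The pushSecret refers to an existing container registry Secret in the same namespace, and additionalKanikoOptions only takes effect where the Kaniko executor performs the build.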
Chapter 38. NetworkPolicyService | Chapter 38. NetworkPolicyService 38.1. GetAllowedPeersFromCurrentPolicyForDeployment GET /v1/networkpolicies/allowedpeers/{id} 38.1.1. Description 38.1.2. Parameters 38.1.2.1. Path Parameters Name Description Required Default Pattern id X null 38.1.3. Return Type V1GetAllowedPeersFromCurrentPolicyForDeploymentResponse 38.1.4. Content Type application/json 38.1.5. Responses Table 38.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetAllowedPeersFromCurrentPolicyForDeploymentResponse 0 An unexpected error response. GooglerpcStatus 38.1.6. Samples 38.1.7. Common object reference 38.1.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 38.1.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 38.1.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 38.1.7.3. StorageL4Protocol Enum Values L4_PROTOCOL_UNKNOWN L4_PROTOCOL_TCP L4_PROTOCOL_UDP L4_PROTOCOL_ICMP L4_PROTOCOL_RAW L4_PROTOCOL_SCTP L4_PROTOCOL_ANY 38.1.7.4. 
StorageNetworkEntityInfoType INTERNAL_ENTITIES: INTERNAL_ENTITIES is for grouping all internal entities under a single network graph node Enum Values UNKNOWN_TYPE DEPLOYMENT INTERNET LISTEN_ENDPOINT EXTERNAL_SOURCE INTERNAL_ENTITIES 38.1.7.5. V1GetAllowedPeersFromCurrentPolicyForDeploymentResponse Field Name Required Nullable Type Description Format allowedPeers List of V1NetworkBaselineStatusPeer 38.1.7.6. V1NetworkBaselinePeerEntity Field Name Required Nullable Type Description Format id String type StorageNetworkEntityInfoType UNKNOWN_TYPE, DEPLOYMENT, INTERNET, LISTEN_ENDPOINT, EXTERNAL_SOURCE, INTERNAL_ENTITIES, 38.1.7.7. V1NetworkBaselineStatusPeer Field Name Required Nullable Type Description Format entity V1NetworkBaselinePeerEntity port Long The port and protocol of the destination of the given connection. int64 protocol StorageL4Protocol L4_PROTOCOL_UNKNOWN, L4_PROTOCOL_TCP, L4_PROTOCOL_UDP, L4_PROTOCOL_ICMP, L4_PROTOCOL_RAW, L4_PROTOCOL_SCTP, L4_PROTOCOL_ANY, ingress Boolean A boolean representing whether the query is for an ingress or egress connection. This is defined with respect to the current deployment. Thus: - If the connection in question is in the outEdges of the current deployment, this should be false. - If it is in the outEdges of the peer deployment, this should be true. 38.2. ApplyNetworkPolicy POST /v1/networkpolicies/apply/{clusterId} 38.2.1. Description 38.2.2. Parameters 38.2.2.1. Path Parameters Name Description Required Default Pattern clusterId X null 38.2.2.2. Body Parameter Name Description Required Default Pattern modification StorageNetworkPolicyModification X 38.2.3. Return Type Object 38.2.4. Content Type application/json 38.2.5. Responses Table 38.2. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. GooglerpcStatus 38.2.6. Samples 38.2.7. Common object reference 38.2.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 38.2.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 38.2.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). 
The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 38.2.7.3. StorageNetworkPolicyModification Field Name Required Nullable Type Description Format applyYaml String toDelete List of StorageNetworkPolicyReference 38.2.7.4. StorageNetworkPolicyReference Field Name Required Nullable Type Description Format namespace String name String 38.3. ApplyNetworkPolicyYamlForDeployment POST /v1/networkpolicies/apply/deployment/{deploymentId} 38.3.1. Description 38.3.2. Parameters 38.3.2.1. Path Parameters Name Description Required Default Pattern deploymentId X null 38.3.2.2. Body Parameter Name Description Required Default Pattern body NetworkPolicyServiceApplyNetworkPolicyYamlForDeploymentBody X 38.3.3. Return Type Object 38.3.4. Content Type application/json 38.3.5. Responses Table 38.3. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. GooglerpcStatus 38.3.6. Samples 38.3.7. Common object reference 38.3.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 38.3.7.2. NetworkPolicyServiceApplyNetworkPolicyYamlForDeploymentBody Field Name Required Nullable Type Description Format modification StorageNetworkPolicyModification 38.3.7.3. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 38.3.7.3.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. 
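The JSON snippets that the surrounding example labels refer to are not included in this rendering. A minimal sketch of the general form, using a hypothetical message type and hypothetical fields, would be:

{
  "@type": "type.googleapis.com/example.package.ExampleMessage",
  "someField": "some value",
  "anotherField": 42
}

For a well-known type with a custom JSON representation, such as the google.protobuf.Duration message referenced just below, the custom representation is carried in a value field alongside @type:

{
  "@type": "type.googleapis.com/google.protobuf.Duration",
  "value": "1.212s"
}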
Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 38.3.7.4. StorageNetworkPolicyModification Field Name Required Nullable Type Description Format applyYaml String toDelete List of StorageNetworkPolicyReference 38.3.7.5. StorageNetworkPolicyReference Field Name Required Nullable Type Description Format namespace String name String 38.4. GetDiffFlowsBetweenPolicyAndBaselineForDeployment GET /v1/networkpolicies/baselinecomparison/{id} 38.4.1. Description 38.4.2. Parameters 38.4.2.1. Path Parameters Name Description Required Default Pattern id X null 38.4.3. Return Type V1GetDiffFlowsResponse 38.4.4. Content Type application/json 38.4.5. Responses Table 38.4. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetDiffFlowsResponse 0 An unexpected error response. GooglerpcStatus 38.4.6. Samples 38.4.7. Common object reference 38.4.7.1. DeploymentListenPort Field Name Required Nullable Type Description Format port Long int64 l4protocol StorageL4Protocol L4_PROTOCOL_UNKNOWN, L4_PROTOCOL_TCP, L4_PROTOCOL_UDP, L4_PROTOCOL_ICMP, L4_PROTOCOL_RAW, L4_PROTOCOL_SCTP, L4_PROTOCOL_ANY, 38.4.7.2. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 38.4.7.3. NetworkEntityInfoExternalSource Update normalizeDupNameExtSrcs(... ) in central/networkgraph/aggregator/aggregator.go whenever this message is updated. Field Name Required Nullable Type Description Format name String cidr String default Boolean default indicates whether the external source is user-generated or system-generated. discovered Boolean discovered indicates whether the external source is harvested from monitored traffic. 38.4.7.4. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. 
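The C++ and Java snippets referenced by the two example labels above are not included in this rendering. A minimal Java sketch of the pack/unpack pattern is shown below; Foo is a placeholder for any generated protobuf message class from your own build and is not defined by this API.

import com.google.protobuf.Any;
import com.google.protobuf.InvalidProtocolBufferException;

public class AnyPackExample {

    // Pack: wraps the serialized message together with a type URL of the form
    // type.googleapis.com/<full.type.name>.
    static Any packFoo(Foo foo) {
        return Any.pack(foo);
    }

    // Unpack: check the recorded type URL first, then deserialize back into Foo.
    static Foo unpackFoo(Any any) throws InvalidProtocolBufferException {
        if (any.is(Foo.class)) {
            return any.unpack(Foo.class);
        }
        return null;
    }
}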
The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 38.4.7.4.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 38.4.7.5. StorageL4Protocol Enum Values L4_PROTOCOL_UNKNOWN L4_PROTOCOL_TCP L4_PROTOCOL_UDP L4_PROTOCOL_ICMP L4_PROTOCOL_RAW L4_PROTOCOL_SCTP L4_PROTOCOL_ANY 38.4.7.6. StorageNetworkBaselineConnectionProperties Field Name Required Nullable Type Description Format ingress Boolean port Long int64 protocol StorageL4Protocol L4_PROTOCOL_UNKNOWN, L4_PROTOCOL_TCP, L4_PROTOCOL_UDP, L4_PROTOCOL_ICMP, L4_PROTOCOL_RAW, L4_PROTOCOL_SCTP, L4_PROTOCOL_ANY, 38.4.7.7. StorageNetworkEntityInfo Field Name Required Nullable Type Description Format type StorageNetworkEntityInfoType UNKNOWN_TYPE, DEPLOYMENT, INTERNET, LISTEN_ENDPOINT, EXTERNAL_SOURCE, INTERNAL_ENTITIES, id String deployment StorageNetworkEntityInfoDeployment externalSource NetworkEntityInfoExternalSource 38.4.7.8. StorageNetworkEntityInfoDeployment Field Name Required Nullable Type Description Format name String namespace String cluster String listenPorts List of DeploymentListenPort 38.4.7.9. StorageNetworkEntityInfoType INTERNAL_ENTITIES: INTERNAL_ENTITIES is for grouping all internal entities under a single network graph node Enum Values UNKNOWN_TYPE DEPLOYMENT INTERNET LISTEN_ENDPOINT EXTERNAL_SOURCE INTERNAL_ENTITIES 38.4.7.10. 
V1GetDiffFlowsGroupedFlow Field Name Required Nullable Type Description Format entity StorageNetworkEntityInfo properties List of StorageNetworkBaselineConnectionProperties 38.4.7.11. V1GetDiffFlowsReconciledFlow Field Name Required Nullable Type Description Format entity StorageNetworkEntityInfo added List of StorageNetworkBaselineConnectionProperties removed List of StorageNetworkBaselineConnectionProperties unchanged List of StorageNetworkBaselineConnectionProperties 38.4.7.12. V1GetDiffFlowsResponse Field Name Required Nullable Type Description Format added List of V1GetDiffFlowsGroupedFlow removed List of V1GetDiffFlowsGroupedFlow reconciled List of V1GetDiffFlowsReconciledFlow 38.5. GetNetworkGraph GET /v1/networkpolicies/cluster/{clusterId} 38.5.1. Description 38.5.2. Parameters 38.5.2.1. Path Parameters Name Description Required Default Pattern clusterId X null 38.5.2.2. Query Parameters Name Description Required Default Pattern query - null includePorts If set to true, include port-level information in the network policy graph. - null scope.query - null 38.5.3. Return Type V1NetworkGraph 38.5.4. Content Type application/json 38.5.5. Responses Table 38.5. HTTP Response Codes Code Message Datatype 200 A successful response. V1NetworkGraph 0 An unexpected error response. GooglerpcStatus 38.5.6. Samples 38.5.7. Common object reference 38.5.7.1. DeploymentListenPort Field Name Required Nullable Type Description Format port Long int64 l4protocol StorageL4Protocol L4_PROTOCOL_UNKNOWN, L4_PROTOCOL_TCP, L4_PROTOCOL_UDP, L4_PROTOCOL_ICMP, L4_PROTOCOL_RAW, L4_PROTOCOL_SCTP, L4_PROTOCOL_ANY, 38.5.7.2. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 38.5.7.3. NetworkEntityInfoExternalSource Update normalizeDupNameExtSrcs(... ) in central/networkgraph/aggregator/aggregator.go whenever this message is updated. Field Name Required Nullable Type Description Format name String cidr String default Boolean default indicates whether the external source is user-generated or system-generated. discovered Boolean discovered indicates whether the external source is harvested from monitored traffic. 38.5.7.4. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 38.5.7.4.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. 
The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 38.5.7.5. StorageL4Protocol Enum Values L4_PROTOCOL_UNKNOWN L4_PROTOCOL_TCP L4_PROTOCOL_UDP L4_PROTOCOL_ICMP L4_PROTOCOL_RAW L4_PROTOCOL_SCTP L4_PROTOCOL_ANY 38.5.7.6. StorageNetworkEntityInfo Field Name Required Nullable Type Description Format type StorageNetworkEntityInfoType UNKNOWN_TYPE, DEPLOYMENT, INTERNET, LISTEN_ENDPOINT, EXTERNAL_SOURCE, INTERNAL_ENTITIES, id String deployment StorageNetworkEntityInfoDeployment externalSource NetworkEntityInfoExternalSource 38.5.7.7. StorageNetworkEntityInfoDeployment Field Name Required Nullable Type Description Format name String namespace String cluster String listenPorts List of DeploymentListenPort 38.5.7.8. StorageNetworkEntityInfoType INTERNAL_ENTITIES: INTERNAL_ENTITIES is for grouping all internal entities under a single network graph node Enum Values UNKNOWN_TYPE DEPLOYMENT INTERNET LISTEN_ENDPOINT EXTERNAL_SOURCE INTERNAL_ENTITIES 38.5.7.9. V1NetworkEdgeProperties Field Name Required Nullable Type Description Format port Long int64 protocol StorageL4Protocol L4_PROTOCOL_UNKNOWN, L4_PROTOCOL_TCP, L4_PROTOCOL_UDP, L4_PROTOCOL_ICMP, L4_PROTOCOL_RAW, L4_PROTOCOL_SCTP, L4_PROTOCOL_ANY, lastActiveTimestamp Date date-time 38.5.7.10. V1NetworkEdgePropertiesBundle Field Name Required Nullable Type Description Format properties List of V1NetworkEdgeProperties 38.5.7.11. V1NetworkGraph Field Name Required Nullable Type Description Format epoch Long int64 nodes List of V1NetworkNode 38.5.7.12. V1NetworkNode Field Name Required Nullable Type Description Format entity StorageNetworkEntityInfo internetAccess Boolean policyIds List of string nonIsolatedIngress Boolean nonIsolatedEgress Boolean queryMatch Boolean outEdges Map of V1NetworkEdgePropertiesBundle 38.6. GetBaselineGeneratedNetworkPolicyForDeployment POST /v1/networkpolicies/generate/baseline/{deploymentId} 38.6.1. Description 38.6.2. Parameters 38.6.2.1. Path Parameters Name Description Required Default Pattern deploymentId X null 38.6.2.2. Body Parameter Name Description Required Default Pattern body NetworkPolicyServiceGetBaselineGeneratedNetworkPolicyForDeploymentBody X 38.6.3. Return Type V1GetBaselineGeneratedPolicyForDeploymentResponse 38.6.4. Content Type application/json 38.6.5. Responses Table 38.6. 
HTTP Response Codes Code Message Datatype 200 A successful response. V1GetBaselineGeneratedPolicyForDeploymentResponse 0 An unexpected error response. GooglerpcStatus 38.6.6. Samples 38.6.7. Common object reference 38.6.7.1. GenerateNetworkPoliciesRequestDeleteExistingPoliciesMode NONE: Do not delete any existing network policies. GENERATED_ONLY: Delete any existing auto-generated network policies. ALL: Delete all existing network policies in the respective namespace. Enum Values UNKNOWN NONE GENERATED_ONLY ALL 38.6.7.2. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 38.6.7.3. NetworkPolicyServiceGetBaselineGeneratedNetworkPolicyForDeploymentBody Field Name Required Nullable Type Description Format deleteExisting GenerateNetworkPoliciesRequestDeleteExistingPoliciesMode UNKNOWN, NONE, GENERATED_ONLY, ALL, includePorts Boolean 38.6.7.4. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 38.6.7.4.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 38.6.7.5. 
StorageNetworkPolicyModification Field Name Required Nullable Type Description Format applyYaml String toDelete List of StorageNetworkPolicyReference 38.6.7.6. StorageNetworkPolicyReference Field Name Required Nullable Type Description Format namespace String name String 38.6.7.7. V1GetBaselineGeneratedPolicyForDeploymentResponse Field Name Required Nullable Type Description Format modification StorageNetworkPolicyModification 38.7. GenerateNetworkPolicies GET /v1/networkpolicies/generate/{clusterId} 38.7.1. Description 38.7.2. Parameters 38.7.2.1. Path Parameters Name Description Required Default Pattern clusterId X null 38.7.2.2. Query Parameters Name Description Required Default Pattern query - null deleteExisting - NONE: Do not delete any existing network policies. - GENERATED_ONLY: Delete any existing auto-generated network policies. - ALL: Delete all existing network policies in the respective namespace. - UNKNOWN networkDataSince - null includePorts - null 38.7.3. Return Type V1GenerateNetworkPoliciesResponse 38.7.4. Content Type application/json 38.7.5. Responses Table 38.7. HTTP Response Codes Code Message Datatype 200 A successful response. V1GenerateNetworkPoliciesResponse 0 An unexpected error response. GooglerpcStatus 38.7.6. Samples 38.7.7. Common object reference 38.7.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 38.7.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 38.7.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. 
Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 38.7.7.3. StorageNetworkPolicyModification Field Name Required Nullable Type Description Format applyYaml String toDelete List of StorageNetworkPolicyReference 38.7.7.4. StorageNetworkPolicyReference Field Name Required Nullable Type Description Format namespace String name String 38.7.7.5. V1GenerateNetworkPoliciesResponse Field Name Required Nullable Type Description Format modification StorageNetworkPolicyModification 38.8. GetNetworkPolicies GET /v1/networkpolicies 38.8.1. Description 38.8.2. Parameters 38.8.2.1. Query Parameters Name Description Required Default Pattern clusterId - null deploymentQuery - null namespace - null 38.8.3. Return Type V1NetworkPoliciesResponse 38.8.4. Content Type application/json 38.8.5. Responses Table 38.8. HTTP Response Codes Code Message Datatype 200 A successful response. V1NetworkPoliciesResponse 0 An unexpected error response. GooglerpcStatus 38.8.6. Samples 38.8.7. Common object reference 38.8.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 38.8.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 38.8.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. 
* An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 38.8.7.3. StorageIPBlock Field Name Required Nullable Type Description Format cidr String except List of string 38.8.7.4. StorageLabelSelector available tag: 3 Field Name Required Nullable Type Description Format matchLabels Map of string This is actually a oneof, but we can't make it one due to backwards compatibility constraints. requirements List of StorageLabelSelectorRequirement 38.8.7.5. StorageLabelSelectorOperator Enum Values UNKNOWN IN NOT_IN EXISTS NOT_EXISTS 38.8.7.6. StorageLabelSelectorRequirement Field Name Required Nullable Type Description Format key String op StorageLabelSelectorOperator UNKNOWN, IN, NOT_IN, EXISTS, NOT_EXISTS, values List of string 38.8.7.7. StorageNetworkPolicy Field Name Required Nullable Type Description Format id String name String clusterId String clusterName String namespace String labels Map of string annotations Map of string spec StorageNetworkPolicySpec yaml String apiVersion String created Date date-time 38.8.7.8. StorageNetworkPolicyEgressRule Field Name Required Nullable Type Description Format ports List of StorageNetworkPolicyPort to List of StorageNetworkPolicyPeer 38.8.7.9. StorageNetworkPolicyIngressRule Field Name Required Nullable Type Description Format ports List of StorageNetworkPolicyPort from List of StorageNetworkPolicyPeer 38.8.7.10. StorageNetworkPolicyPeer Field Name Required Nullable Type Description Format podSelector StorageLabelSelector namespaceSelector StorageLabelSelector ipBlock StorageIPBlock 38.8.7.11. StorageNetworkPolicyPort Field Name Required Nullable Type Description Format protocol StorageProtocol UNSET_PROTOCOL, TCP_PROTOCOL, UDP_PROTOCOL, SCTP_PROTOCOL, port Integer int32 portName String 38.8.7.12. StorageNetworkPolicySpec Field Name Required Nullable Type Description Format podSelector StorageLabelSelector ingress List of StorageNetworkPolicyIngressRule egress List of StorageNetworkPolicyEgressRule policyTypes List of StorageNetworkPolicyType 38.8.7.13. StorageNetworkPolicyType Enum Values UNSET_NETWORK_POLICY_TYPE INGRESS_NETWORK_POLICY_TYPE EGRESS_NETWORK_POLICY_TYPE 38.8.7.14. StorageProtocol Enum Values UNSET_PROTOCOL TCP_PROTOCOL UDP_PROTOCOL SCTP_PROTOCOL 38.8.7.15. V1NetworkPoliciesResponse Field Name Required Nullable Type Description Format networkPolicies List of StorageNetworkPolicy 38.9. GetNetworkGraphEpoch GET /v1/networkpolicies/graph/epoch 38.9.1. Description 38.9.2. Parameters 38.9.2.1. Query Parameters Name Description Required Default Pattern clusterId - null 38.9.3. Return Type V1NetworkGraphEpoch 38.9.4. Content Type application/json 38.9.5. Responses Table 38.9. HTTP Response Codes Code Message Datatype 200 A successful response. V1NetworkGraphEpoch 0 An unexpected error response. GooglerpcStatus 38.9.6. Samples 38.9.7. Common object reference 38.9.7.1. 
GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 38.9.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 38.9.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 38.9.7.3. V1NetworkGraphEpoch Field Name Required Nullable Type Description Format epoch Long int64 38.10. GetNetworkPolicy GET /v1/networkpolicies/{id} 38.10.1. Description 38.10.2. Parameters 38.10.2.1. Path Parameters Name Description Required Default Pattern id X null 38.10.3. Return Type StorageNetworkPolicy 38.10.4. Content Type application/json 38.10.5. Responses Table 38.10. HTTP Response Codes Code Message Datatype 200 A successful response. StorageNetworkPolicy 0 An unexpected error response. GooglerpcStatus 38.10.6. Samples 38.10.7. Common object reference 38.10.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 38.10.7.2. 
ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 38.10.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 38.10.7.3. StorageIPBlock Field Name Required Nullable Type Description Format cidr String except List of string 38.10.7.4. StorageLabelSelector available tag: 3 Field Name Required Nullable Type Description Format matchLabels Map of string This is actually a oneof, but we can't make it one due to backwards compatibility constraints. requirements List of StorageLabelSelectorRequirement 38.10.7.5. StorageLabelSelectorOperator Enum Values UNKNOWN IN NOT_IN EXISTS NOT_EXISTS 38.10.7.6. StorageLabelSelectorRequirement Field Name Required Nullable Type Description Format key String op StorageLabelSelectorOperator UNKNOWN, IN, NOT_IN, EXISTS, NOT_EXISTS, values List of string 38.10.7.7. 
StorageNetworkPolicy Field Name Required Nullable Type Description Format id String name String clusterId String clusterName String namespace String labels Map of string annotations Map of string spec StorageNetworkPolicySpec yaml String apiVersion String created Date date-time 38.10.7.8. StorageNetworkPolicyEgressRule Field Name Required Nullable Type Description Format ports List of StorageNetworkPolicyPort to List of StorageNetworkPolicyPeer 38.10.7.9. StorageNetworkPolicyIngressRule Field Name Required Nullable Type Description Format ports List of StorageNetworkPolicyPort from List of StorageNetworkPolicyPeer 38.10.7.10. StorageNetworkPolicyPeer Field Name Required Nullable Type Description Format podSelector StorageLabelSelector namespaceSelector StorageLabelSelector ipBlock StorageIPBlock 38.10.7.11. StorageNetworkPolicyPort Field Name Required Nullable Type Description Format protocol StorageProtocol UNSET_PROTOCOL, TCP_PROTOCOL, UDP_PROTOCOL, SCTP_PROTOCOL, port Integer int32 portName String 38.10.7.12. StorageNetworkPolicySpec Field Name Required Nullable Type Description Format podSelector StorageLabelSelector ingress List of StorageNetworkPolicyIngressRule egress List of StorageNetworkPolicyEgressRule policyTypes List of StorageNetworkPolicyType 38.10.7.13. StorageNetworkPolicyType Enum Values UNSET_NETWORK_POLICY_TYPE INGRESS_NETWORK_POLICY_TYPE EGRESS_NETWORK_POLICY_TYPE 38.10.7.14. StorageProtocol Enum Values UNSET_PROTOCOL TCP_PROTOCOL UDP_PROTOCOL SCTP_PROTOCOL 38.11. SendNetworkPolicyYAML POST /v1/networkpolicies/simulate/{clusterId}/notify 38.11.1. Description 38.11.2. Parameters 38.11.2.1. Path Parameters Name Description Required Default Pattern clusterId X null 38.11.2.2. Body Parameter Name Description Required Default Pattern modification StorageNetworkPolicyModification X 38.11.2.3. Query Parameters Name Description Required Default Pattern notifierIds String - null 38.11.3. Return Type Object 38.11.4. Content Type application/json 38.11.5. Responses Table 38.11. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. GooglerpcStatus 38.11.6. Samples 38.11.7. Common object reference 38.11.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 38.11.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 38.11.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. 
Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 38.11.7.3. StorageNetworkPolicyModification Field Name Required Nullable Type Description Format applyYaml String toDelete List of StorageNetworkPolicyReference 38.11.7.4. StorageNetworkPolicyReference Field Name Required Nullable Type Description Format namespace String name String 38.12. SimulateNetworkGraph POST /v1/networkpolicies/simulate/{clusterId} 38.12.1. Description 38.12.2. Parameters 38.12.2.1. Path Parameters Name Description Required Default Pattern clusterId X null 38.12.2.2. Body Parameter Name Description Required Default Pattern modification StorageNetworkPolicyModification X 38.12.2.3. Query Parameters Name Description Required Default Pattern query - null includePorts If set to true, include port-level information in the network policy graph. - null includeNodeDiff - null scope.query - null 38.12.3. Return Type V1SimulateNetworkGraphResponse 38.12.4. Content Type application/json 38.12.5. Responses Table 38.12. HTTP Response Codes Code Message Datatype 200 A successful response. V1SimulateNetworkGraphResponse 0 An unexpected error response. GooglerpcStatus 38.12.6. Samples 38.12.7. Common object reference 38.12.7.1. DeploymentListenPort Field Name Required Nullable Type Description Format port Long int64 l4protocol StorageL4Protocol L4_PROTOCOL_UNKNOWN, L4_PROTOCOL_TCP, L4_PROTOCOL_UDP, L4_PROTOCOL_ICMP, L4_PROTOCOL_RAW, L4_PROTOCOL_SCTP, L4_PROTOCOL_ANY, 38.12.7.2. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 38.12.7.3. NetworkEntityInfoExternalSource Update normalizeDupNameExtSrcs(... ) in central/networkgraph/aggregator/aggregator.go whenever this message is updated. Field Name Required Nullable Type Description Format name String cidr String default Boolean default indicates whether the external source is user-generated or system-generated. discovered Boolean discovered indicates whether the external source is harvested from monitored traffic. 38.12.7.4. 
ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 38.12.7.4.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 38.12.7.5. StorageIPBlock Field Name Required Nullable Type Description Format cidr String except List of string 38.12.7.6. StorageL4Protocol Enum Values L4_PROTOCOL_UNKNOWN L4_PROTOCOL_TCP L4_PROTOCOL_UDP L4_PROTOCOL_ICMP L4_PROTOCOL_RAW L4_PROTOCOL_SCTP L4_PROTOCOL_ANY 38.12.7.7. StorageLabelSelector available tag: 3 Field Name Required Nullable Type Description Format matchLabels Map of string This is actually a oneof, but we can't make it one due to backwards compatibility constraints. requirements List of StorageLabelSelectorRequirement 38.12.7.8. StorageLabelSelectorOperator Enum Values UNKNOWN IN NOT_IN EXISTS NOT_EXISTS 38.12.7.9. StorageLabelSelectorRequirement Field Name Required Nullable Type Description Format key String op StorageLabelSelectorOperator UNKNOWN, IN, NOT_IN, EXISTS, NOT_EXISTS, values List of string 38.12.7.10. 
StorageNetworkEntityInfo Field Name Required Nullable Type Description Format type StorageNetworkEntityInfoType UNKNOWN_TYPE, DEPLOYMENT, INTERNET, LISTEN_ENDPOINT, EXTERNAL_SOURCE, INTERNAL_ENTITIES, id String deployment StorageNetworkEntityInfoDeployment externalSource NetworkEntityInfoExternalSource 38.12.7.11. StorageNetworkEntityInfoDeployment Field Name Required Nullable Type Description Format name String namespace String cluster String listenPorts List of DeploymentListenPort 38.12.7.12. StorageNetworkEntityInfoType INTERNAL_ENTITIES: INTERNAL_ENTITIES is for grouping all internal entities under a single network graph node Enum Values UNKNOWN_TYPE DEPLOYMENT INTERNET LISTEN_ENDPOINT EXTERNAL_SOURCE INTERNAL_ENTITIES 38.12.7.13. StorageNetworkPolicy Field Name Required Nullable Type Description Format id String name String clusterId String clusterName String namespace String labels Map of string annotations Map of string spec StorageNetworkPolicySpec yaml String apiVersion String created Date date-time 38.12.7.14. StorageNetworkPolicyEgressRule Field Name Required Nullable Type Description Format ports List of StorageNetworkPolicyPort to List of StorageNetworkPolicyPeer 38.12.7.15. StorageNetworkPolicyIngressRule Field Name Required Nullable Type Description Format ports List of StorageNetworkPolicyPort from List of StorageNetworkPolicyPeer 38.12.7.16. StorageNetworkPolicyModification Field Name Required Nullable Type Description Format applyYaml String toDelete List of StorageNetworkPolicyReference 38.12.7.17. StorageNetworkPolicyPeer Field Name Required Nullable Type Description Format podSelector StorageLabelSelector namespaceSelector StorageLabelSelector ipBlock StorageIPBlock 38.12.7.18. StorageNetworkPolicyPort Field Name Required Nullable Type Description Format protocol StorageProtocol UNSET_PROTOCOL, TCP_PROTOCOL, UDP_PROTOCOL, SCTP_PROTOCOL, port Integer int32 portName String 38.12.7.19. StorageNetworkPolicyReference Field Name Required Nullable Type Description Format namespace String name String 38.12.7.20. StorageNetworkPolicySpec Field Name Required Nullable Type Description Format podSelector StorageLabelSelector ingress List of StorageNetworkPolicyIngressRule egress List of StorageNetworkPolicyEgressRule policyTypes List of StorageNetworkPolicyType 38.12.7.21. StorageNetworkPolicyType Enum Values UNSET_NETWORK_POLICY_TYPE INGRESS_NETWORK_POLICY_TYPE EGRESS_NETWORK_POLICY_TYPE 38.12.7.22. StorageProtocol Enum Values UNSET_PROTOCOL TCP_PROTOCOL UDP_PROTOCOL SCTP_PROTOCOL 38.12.7.23. V1NetworkEdgeProperties Field Name Required Nullable Type Description Format port Long int64 protocol StorageL4Protocol L4_PROTOCOL_UNKNOWN, L4_PROTOCOL_TCP, L4_PROTOCOL_UDP, L4_PROTOCOL_ICMP, L4_PROTOCOL_RAW, L4_PROTOCOL_SCTP, L4_PROTOCOL_ANY, lastActiveTimestamp Date date-time 38.12.7.24. V1NetworkEdgePropertiesBundle Field Name Required Nullable Type Description Format properties List of V1NetworkEdgeProperties 38.12.7.25. V1NetworkGraph Field Name Required Nullable Type Description Format epoch Long int64 nodes List of V1NetworkNode 38.12.7.26. V1NetworkGraphDiff Field Name Required Nullable Type Description Format DEPRECATEDNodeDiffs Map of V1NetworkNodeDiff nodeDiffs Map of V1NetworkNodeDiff 38.12.7.27. V1NetworkNode Field Name Required Nullable Type Description Format entity StorageNetworkEntityInfo internetAccess Boolean policyIds List of string nonIsolatedIngress Boolean nonIsolatedEgress Boolean queryMatch Boolean outEdges Map of V1NetworkEdgePropertiesBundle 38.12.7.28. 
V1NetworkNodeDiff Field Name Required Nullable Type Description Format policyIds List of string DEPRECATEDOutEdges Map of V1NetworkEdgePropertiesBundle outEdges Map of V1NetworkEdgePropertiesBundle nonIsolatedIngress Boolean nonIsolatedEgress Boolean 38.12.7.29. V1NetworkPolicyInSimulation Field Name Required Nullable Type Description Format policy StorageNetworkPolicy status V1NetworkPolicyInSimulationStatus INVALID, UNCHANGED, MODIFIED, ADDED, DELETED, oldPolicy StorageNetworkPolicy 38.12.7.30. V1NetworkPolicyInSimulationStatus Enum Values INVALID UNCHANGED MODIFIED ADDED DELETED 38.12.7.31. V1SimulateNetworkGraphResponse Field Name Required Nullable Type Description Format simulatedGraph V1NetworkGraph policies List of V1NetworkPolicyInSimulation added V1NetworkGraphDiff removed V1NetworkGraphDiff 38.13. GetDiffFlowsFromUndoModificationForDeployment GET /v1/networkpolicies/undobaselinecomparison/{id} 38.13.1. Description 38.13.2. Parameters 38.13.2.1. Path Parameters Name Description Required Default Pattern id X null 38.13.3. Return Type V1GetDiffFlowsResponse 38.13.4. Content Type application/json 38.13.5. Responses Table 38.13. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetDiffFlowsResponse 0 An unexpected error response. GooglerpcStatus 38.13.6. Samples 38.13.7. Common object reference 38.13.7.1. DeploymentListenPort Field Name Required Nullable Type Description Format port Long int64 l4protocol StorageL4Protocol L4_PROTOCOL_UNKNOWN, L4_PROTOCOL_TCP, L4_PROTOCOL_UDP, L4_PROTOCOL_ICMP, L4_PROTOCOL_RAW, L4_PROTOCOL_SCTP, L4_PROTOCOL_ANY, 38.13.7.2. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 38.13.7.3. NetworkEntityInfoExternalSource Update normalizeDupNameExtSrcs(... ) in central/networkgraph/aggregator/aggregator.go whenever this message is updated. Field Name Required Nullable Type Description Format name String cidr String default Boolean default indicates whether the external source is user-generated or system-generated. discovered Boolean discovered indicates whether the external source is harvested from monitored traffic. 38.13.7.4. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 38.13.7.4.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. 
The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 38.13.7.5. StorageL4Protocol Enum Values L4_PROTOCOL_UNKNOWN L4_PROTOCOL_TCP L4_PROTOCOL_UDP L4_PROTOCOL_ICMP L4_PROTOCOL_RAW L4_PROTOCOL_SCTP L4_PROTOCOL_ANY 38.13.7.6. StorageNetworkBaselineConnectionProperties Field Name Required Nullable Type Description Format ingress Boolean port Long int64 protocol StorageL4Protocol L4_PROTOCOL_UNKNOWN, L4_PROTOCOL_TCP, L4_PROTOCOL_UDP, L4_PROTOCOL_ICMP, L4_PROTOCOL_RAW, L4_PROTOCOL_SCTP, L4_PROTOCOL_ANY, 38.13.7.7. StorageNetworkEntityInfo Field Name Required Nullable Type Description Format type StorageNetworkEntityInfoType UNKNOWN_TYPE, DEPLOYMENT, INTERNET, LISTEN_ENDPOINT, EXTERNAL_SOURCE, INTERNAL_ENTITIES, id String deployment StorageNetworkEntityInfoDeployment externalSource NetworkEntityInfoExternalSource 38.13.7.8. StorageNetworkEntityInfoDeployment Field Name Required Nullable Type Description Format name String namespace String cluster String listenPorts List of DeploymentListenPort 38.13.7.9. StorageNetworkEntityInfoType INTERNAL_ENTITIES: INTERNAL_ENTITIES is for grouping all internal entities under a single network graph node Enum Values UNKNOWN_TYPE DEPLOYMENT INTERNET LISTEN_ENDPOINT EXTERNAL_SOURCE INTERNAL_ENTITIES 38.13.7.10. V1GetDiffFlowsGroupedFlow Field Name Required Nullable Type Description Format entity StorageNetworkEntityInfo properties List of StorageNetworkBaselineConnectionProperties 38.13.7.11. V1GetDiffFlowsReconciledFlow Field Name Required Nullable Type Description Format entity StorageNetworkEntityInfo added List of StorageNetworkBaselineConnectionProperties removed List of StorageNetworkBaselineConnectionProperties unchanged List of StorageNetworkBaselineConnectionProperties 38.13.7.12. V1GetDiffFlowsResponse Field Name Required Nullable Type Description Format added List of V1GetDiffFlowsGroupedFlow removed List of V1GetDiffFlowsGroupedFlow reconciled List of V1GetDiffFlowsReconciledFlow 38.14. GetUndoModification GET /v1/networkpolicies/undo/{clusterId} 38.14.1. Description 38.14.2. Parameters 38.14.2.1. Path Parameters Name Description Required Default Pattern clusterId X null 38.14.3. Return Type V1GetUndoModificationResponse 38.14.4. Content Type application/json 38.14.5. Responses Table 38.14. HTTP Response Codes Code Message Datatype 200 A successful response. 
V1GetUndoModificationResponse 0 An unexpected error response. GooglerpcStatus 38.14.6. Samples 38.14.7. Common object reference 38.14.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 38.14.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 38.14.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 38.14.7.3. StorageNetworkPolicyApplicationUndoRecord Field Name Required Nullable Type Description Format clusterId String user String applyTimestamp Date date-time originalModification StorageNetworkPolicyModification undoModification StorageNetworkPolicyModification 38.14.7.4. StorageNetworkPolicyModification Field Name Required Nullable Type Description Format applyYaml String toDelete List of StorageNetworkPolicyReference 38.14.7.5. StorageNetworkPolicyReference Field Name Required Nullable Type Description Format namespace String name String 38.14.7.6. 
V1GetUndoModificationResponse Field Name Required Nullable Type Description Format undoRecord StorageNetworkPolicyApplicationUndoRecord 38.15. GetUndoModificationForDeployment GET /v1/networkpolicies/undo/deployment/{id} 38.15.1. Description 38.15.2. Parameters 38.15.2.1. Path Parameters Name Description Required Default Pattern id X null 38.15.3. Return Type V1GetUndoModificationForDeploymentResponse 38.15.4. Content Type application/json 38.15.5. Responses Table 38.15. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetUndoModificationForDeploymentResponse 0 An unexpected error response. GooglerpcStatus 38.15.6. Samples 38.15.7. Common object reference 38.15.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 38.15.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 38.15.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 38.15.7.3. 
StorageNetworkPolicyApplicationUndoRecord Field Name Required Nullable Type Description Format clusterId String user String applyTimestamp Date date-time originalModification StorageNetworkPolicyModification undoModification StorageNetworkPolicyModification 38.15.7.4. StorageNetworkPolicyModification Field Name Required Nullable Type Description Format applyYaml String toDelete List of StorageNetworkPolicyReference 38.15.7.5. StorageNetworkPolicyReference Field Name Required Nullable Type Description Format namespace String name String 38.15.7.6. V1GetUndoModificationForDeploymentResponse Field Name Required Nullable Type Description Format undoRecord StorageNetworkPolicyApplicationUndoRecord | [
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Next available tag: 3",
"Next available tag: 3",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Next available tag: 3",
"Next available tag: 3",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"NetworkBaselineConnectionProperties represents information about a baseline connection next available tag: 4",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Next available tag: 3",
"Next available tag: 3",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Next available tag: 3",
"Next available tag: 3",
"Next available tag: 2",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Label selector components are joined with logical AND, see https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/",
"Next available tag: 4",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Label selector components are joined with logical AND, see https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/",
"Next available tag: 4",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Next available tag: 3",
"Next available tag: 3",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Label selector components are joined with logical AND, see https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/",
"Next available tag: 4",
"Next available tag: 3",
"Next available tag: 3",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"NetworkBaselineConnectionProperties represents information about a baseline connection next available tag: 4",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Next available tag: 3",
"Next available tag: 3",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Next available tag: 3",
"Next available tag: 3"
]
| https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/api_reference/networkpolicyservice |
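The undo endpoints documented above can be exercised directly with curl. The following is only an illustrative sketch, not part of the API reference: it assumes Central is reachable at a host stored in ROX_ENDPOINT, that ROX_API_TOKEN holds a valid API token, and that the cluster UUID is a placeholder you substitute.
# Fetch the most recent network policy undo record for a cluster and print
# the YAML that would be applied to revert the last modification
# (field names follow V1GetUndoModificationResponse above).
CLUSTER_ID="<cluster-uuid>"
curl -sk -H "Authorization: Bearer ${ROX_API_TOKEN}" \
  "https://${ROX_ENDPOINT}/v1/networkpolicies/undo/${CLUSTER_ID}" \
  | jq -r '.undoRecord.undoModification.applyYaml'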
3.7. Hardening TLS Configuration | 3.7. Hardening TLS Configuration TLS ( Transport Layer Security ) is a cryptographic protocol used to secure network communications. When hardening system security settings by configuring preferred key-exchange protocols , authentication methods , and encryption algorithms , it is necessary to bear in mind that the broader the range of supported clients, the lower the resulting security. Conversely, strict security settings lead to a limited compatibility with clients, which can result in some users being locked out of the system. Be sure to target the strictest available configuration and only relax it when it is required for compatibility reasons. Note that the default settings provided by libraries included in Red Hat Enterprise Linux are secure enough for most deployments. The TLS implementations use secure algorithms where possible while not preventing connections from or to legacy clients or servers. Apply the hardened settings described in this section in environments with strict security requirements where legacy clients or servers that do not support secure algorithms or protocols are not expected or allowed to connect. 3.7.1. Choosing Algorithms to Enable There are several components that need to be selected and configured. Each of the following directly influences the robustness of the resulting configuration (and, consequently, the level of support in clients) or the computational demands that the solution has on the system. Protocol Versions The latest version of TLS provides the best security mechanism. Unless you have a compelling reason to include support for older versions of TLS (or even SSL ), allow your systems to negotiate connections using only the latest version of TLS . Do not allow negotiation using SSL version 2 or 3. Both of those versions have serious security vulnerabilities. Only allow negotiation using TLS version 1.0 or higher. The current version of TLS , 1.2, should always be preferred. Note Please note that currently, the security of all versions of TLS depends on the use of TLS extensions, specific ciphers (see below), and other workarounds. All TLS connection peers need to implement secure renegotiation indication ( RFC 5746 ), must not support compression, and must implement mitigating measures for timing attacks against CBC -mode ciphers (the Lucky Thirteen attack). TLS v1.0 clients need to additionally implement record splitting (a workaround against the BEAST attack). TLS v1.2 supports Authenticated Encryption with Associated Data ( AEAD ) mode ciphers like AES-GCM , AES-CCM , or Camellia-GCM , which have no known issues. All the mentioned mitigations are implemented in cryptographic libraries included in Red Hat Enterprise Linux. See Table 3.1, "Protocol Versions" for a quick overview of protocol versions and recommended usage. Table 3.1. Protocol Versions Protocol Version Usage Recommendation SSL v2 Do not use. Has serious security vulnerabilities. SSL v3 Do not use. Has serious security vulnerabilities. TLS v1.0 Use for interoperability purposes where needed. Has known issues that cannot be mitigated in a way that guarantees interoperability, and thus mitigations are not enabled by default. Does not support modern cipher suites. TLS v1.1 Use for interoperability purposes where needed. Has no known issues but relies on protocol fixes that are included in all the TLS implementations in Red Hat Enterprise Linux. Does not support modern cipher suites. TLS v1.2 Recommended version. 
Supports the modern AEAD cipher suites. Some components in Red Hat Enterprise Linux are configured to use TLS v1.0 even though they provide support for TLS v1.1 or even v1.2 . This is motivated by an attempt to achieve the highest level of interoperability with external services that may not support the latest versions of TLS . Depending on your interoperability requirements, enable the highest available version of TLS . Important SSL v3 is not recommended for use. However, if, despite the fact that it is considered insecure and unsuitable for general use, you absolutely must leave SSL v3 enabled, see Section 3.6, "Using stunnel" for instructions on how to use stunnel to securely encrypt communications even when using services that do not support encryption or are only capable of using obsolete and insecure modes of encryption. While not immediately insecure, cipher suites that offer less than 128 bits of security should not be considered for their short useful life. Algorithms that use 128 bit of security or more can be expected to be unbreakable for at least several years, and are thus strongly recommended. Note that while 3DES ciphers advertise the use of 168 bits, they actually offer 112 bits of security. Always give preference to cipher suites that support (perfect) forward secrecy ( PFS ), which ensures the confidentiality of encrypted data even in case the server key is compromised. This rules out the fast RSA key exchange, but allows for the use of ECDHE and DHE . Of the two, ECDHE is the faster and therefore the preferred choice. Note also that when using the ECDHE key exchange with ECDSA certificates, the transaction is even faster than pure RSA key exchange. To provide support for legacy clients, you can install two pairs of certificates and keys on a server: one with ECDSA keys (for new clients) and one with RSA keys (for legacy ones). Public Key Length When using RSA keys, always prefer key lengths of at least 3072 bits signed by at least SHA-256, which is sufficiently large for true 128 bits of security. Warning Keep in mind that the security of your system is only as strong as the weakest link in the chain. For example, a strong cipher alone does not guarantee good security. The keys and the certificates are just as important, as well as the hash functions and keys used by the Certification Authority ( CA ) to sign your keys. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sec-hardening_tls_configuration |
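A quick way to verify that a server hardened along the lines of the preceding section really refuses the legacy protocol versions is to probe it with openssl s_client. This is an illustrative check rather than part of the guide: example.com:443 is a placeholder, and the -ssl3 option is only available in OpenSSL builds that still include SSLv3 support.
# A hardened server should reject the SSLv3 handshake outright.
openssl s_client -connect example.com:443 -ssl3 < /dev/null
# TLS v1.2 should still negotiate; print the agreed protocol and cipher so
# they can be checked against the recommendations in Table 3.1.
openssl s_client -connect example.com:443 -tls1_2 < /dev/null 2>/dev/null \
  | grep -E 'Protocol|Cipher'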
Chapter 3. Installing RHEL AI on AWS There are multiple ways you can install and deploy Red Hat Enterprise Linux AI on AWS. You can purchase RHEL AI from the AWS marketplace . You can download the RHEL AI RAW file on the RHEL AI download page and convert it to an AWS image. For installing and deploying RHEL AI using the RAW file, you must first convert the RHEL AI image into an Amazon Machine Image (AMI). 3.1. Converting the RHEL AI image to an AWS AMI Before deploying RHEL AI on an AWS machine, you must set up an S3 bucket and convert the RHEL AI image to an AWS AMI. In the following process, you create the following resources: An S3 bucket with the RHEL AI image AWS EC2 snapshots An AWS AMI An AWS instance Prerequisites You have an Access Key ID configured in the AWS IAM account manager . Procedure Install the AWS command-line tool by following the AWS documentation . You need to create an S3 bucket and set the permissions to allow image file conversion to AWS snapshots. Create the necessary environment variables by running the following commands: $ export BUCKET=<custom_bucket_name> $ export RAW_AMI=nvidia-bootc.ami $ export AMI_NAME="rhel-ai" $ export DEFAULT_VOLUME_SIZE=1000 Note On AWS, the DEFAULT_VOLUME_SIZE is measured in GBs. You can create an S3 bucket by running the following command: $ aws s3 mb s3://$BUCKET You must create a trust-policy.json file with the necessary configurations for generating an S3 role for your bucket: $ printf '{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "vmie.amazonaws.com" }, "Action": "sts:AssumeRole", "Condition": { "StringEquals":{ "sts:Externalid": "vmimport" } } } ] }' > trust-policy.json Create an S3 role for your bucket that you can name. In the following example command, vmimport is the name of the role.
$ aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json You must create a role-policy.json file with the necessary configurations for generating a policy for your bucket: $ printf '{ "Version":"2012-10-17", "Statement":[ { "Effect":"Allow", "Action":[ "s3:GetBucketLocation", "s3:GetObject", "s3:ListBucket" ], "Resource":[ "arn:aws:s3:::%s", "arn:aws:s3:::%s/*" ] }, { "Effect":"Allow", "Action":[ "ec2:ModifySnapshotAttribute", "ec2:CopySnapshot", "ec2:RegisterImage", "ec2:Describe*" ], "Resource":"*" } ] }' $BUCKET $BUCKET > role-policy.json Create a policy for your bucket by running the following command: $ aws iam put-role-policy --role-name vmimport --policy-name vmimport-$BUCKET --policy-document file://role-policy.json Now that your S3 bucket is set up, you need to download the RAW image from the Red Hat Enterprise Linux AI download page . Copy the RAW image link and add it to the following command: $ curl -Lo disk.raw <link-to-raw-file> Upload the image to the S3 bucket with the following command: $ aws s3 cp disk.raw s3://$BUCKET/$RAW_AMI Convert the image to a snapshot and store the resulting task ID in the task_id variable by running the following commands: $ printf '{ "Description": "my-image", "Format": "raw", "UserBucket": { "S3Bucket": "%s", "S3Key": "%s" } }' $BUCKET $RAW_AMI > containers.json $ task_id=$(aws ec2 import-snapshot --disk-container file://containers.json | jq -r .ImportTaskId) You can check the progress of the disk image to snapshot conversion job with the following command: $ aws ec2 describe-import-snapshot-tasks --filters Name=task-state,Values=active Once the conversion job is complete, you can get the snapshot ID and store it in a variable called snapshot_id by running the following command: $ snapshot_id=$(aws ec2 describe-import-snapshot-tasks | jq -r '.ImportSnapshotTasks[] | select(.ImportTaskId=="'${task_id}'") | .SnapshotTaskDetail.SnapshotId') Add a tag name to the snapshot, so it's easier to identify, by running the following command: $ aws ec2 create-tags --resources $snapshot_id --tags Key=Name,Value="$AMI_NAME" Register an AMI from the snapshot with the following command: $ ami_id=$(aws ec2 register-image \ --name "$AMI_NAME" \ --description "$AMI_NAME" \ --architecture x86_64 \ --root-device-name /dev/sda1 \ --block-device-mappings "DeviceName=/dev/sda1,Ebs={VolumeSize=${DEFAULT_VOLUME_SIZE},SnapshotId=${snapshot_id}}" \ --virtualization-type hvm \ --ena-support \ | jq -r .ImageId) You can add another tag name to identify the AMI by running the following command: $ aws ec2 create-tags --resources $ami_id --tags Key=Name,Value="$AMI_NAME" 3.2. Deploying your instance on AWS using the CLI You can launch an AWS instance with your new RHEL AI AMI from the AWS web console or the CLI; use whichever deployment method you prefer. The following procedure shows how to use the CLI to launch your AWS instance with the custom AMI. If you choose to use the CLI as a deployment option, there are several configurations you have to create, as shown in "Prerequisites". Prerequisites You created your RHEL AI AMI. For more information, see "Converting the RHEL AI image to an AWS AMI". You have the AWS command-line tool installed and properly configured with your aws_access_key_id and aws_secret_access_key. You configured your Virtual Private Cloud (VPC). You created a subnet for your instance. You created an SSH key pair.
You created a security group on AWS. Procedure For several of the parameters, you first need to gather the corresponding resource IDs. To access the image ID, run the following command: $ aws ec2 describe-images --owners self To access the security group ID, run the following command: $ aws ec2 describe-security-groups To access the subnet ID, run the following command: $ aws ec2 describe-subnets Populate the environment variables used when you create the instance: $ instance_name=rhel-ai-instance $ ami=<ami-id> $ instance_type=<instance-type-size> $ key_name=<key-pair-name> $ security_group=<sg-id> $ subnet=<subnet-id> $ disk_size=<size-of-disk> Create your instance using the variables by running the following command: $ aws ec2 run-instances \ --image-id $ami \ --instance-type $instance_type \ --key-name $key_name \ --security-group-ids $security_group \ --subnet-id $subnet \ --block-device-mappings DeviceName=/dev/sda1,Ebs='{VolumeSize='$disk_size'}' \ --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value='$instance_name'}]' User account The default user account in the RHEL AI AMI is cloud-user . It has full sudo permissions without requiring a password. Verification To verify that your Red Hat Enterprise Linux AI tools are installed correctly, run the ilab command: $ ilab Example output $ ilab Usage: ilab [OPTIONS] COMMAND [ARGS]... CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/<user>/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. data Command Group for Interacting with the Data generated by... model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat generate data generate serve model serve train model train | [
"export BUCKET=<custom_bucket_name> export RAW_AMI=nvidia-bootc.ami export AMI_NAME=\"rhel-ai\" export DEFAULT_VOLUME_SIZE=1000",
"aws s3 mb s3://USDBUCKET",
"printf '{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Service\": \"vmie.amazonaws.com\" }, \"Action\": \"sts:AssumeRole\", \"Condition\": { \"StringEquals\":{ \"sts:Externalid\": \"vmimport\" } } } ] }' > trust-policy.json",
"aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json",
"printf '{ \"Version\":\"2012-10-17\", \"Statement\":[ { \"Effect\":\"Allow\", \"Action\":[ \"s3:GetBucketLocation\", \"s3:GetObject\", \"s3:ListBucket\" ], \"Resource\":[ \"arn:aws:s3:::%s\", \"arn:aws:s3:::%s/*\" ] }, { \"Effect\":\"Allow\", \"Action\":[ \"ec2:ModifySnapshotAttribute\", \"ec2:CopySnapshot\", \"ec2:RegisterImage\", \"ec2:Describe*\" ], \"Resource\":\"*\" } ] }' USDBUCKET USDBUCKET > role-policy.json",
"aws iam put-role-policy --role-name vmimport --policy-name vmimport-USDBUCKET --policy-document file://role-policy.json",
"curl -Lo disk.raw <link-to-raw-file>",
"aws s3 cp disk.raw s3://USDBUCKET/USDRAW_AMI",
"printf '{ \"Description\": \"my-image\", \"Format\": \"raw\", \"UserBucket\": { \"S3Bucket\": \"%s\", \"S3Key\": \"%s\" } }' USDBUCKET USDRAW_AMI > containers.json",
"task_id=USD(aws ec2 import-snapshot --disk-container file://containers.json | jq -r .ImportTaskId)",
"aws ec2 describe-import-snapshot-tasks --filters Name=task-state,Values=active",
"snapshot_id=USD(aws ec2 describe-import-snapshot-tasks | jq -r '.ImportSnapshotTasks[] | select(.ImportTaskId==\"'USD{task_id}'\") | .SnapshotTaskDetail.SnapshotId')",
"aws ec2 create-tags --resources USDsnapshot_id --tags Key=Name,Value=\"USDAMI_NAME\"",
"ami_id=USD(aws ec2 register-image --name \"USDAMI_NAME\" --description \"USDAMI_NAME\" --architecture x86_64 --root-device-name /dev/sda1 --block-device-mappings \"DeviceName=/dev/sda1,Ebs={VolumeSize=USD{DEFAULT_VOLUME_SIZE},SnapshotId=USD{snapshot_id}}\" --virtualization-type hvm --ena-support | jq -r .ImageId)",
"aws ec2 create-tags --resources USDami_id --tags Key=Name,Value=\"USDAMI_NAME\"",
"aws ec2 describe-images --owners self",
"aws ec2 describe-security-groups",
"aws ec2 describe-subnets",
"instance_name=rhel-ai-instance ami=<ami-id> instance_type=<instance-type-size> key_name=<key-pair-name> security_group=<sg-id> disk_size=<size-of-disk>",
"aws ec2 run-instances --image-id USDami --instance-type USDinstance_type --key-name USDkey_name --security-group-ids USDsecurity_group --subnet-id USDsubnet --block-device-mappings DeviceName=/dev/sda1,Ebs='{VolumeSize='USDdisk_size'}' --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value='USDinstance_name'}]'",
"ilab",
"ilab Usage: ilab [OPTIONS] COMMAND [ARGS] CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/<user>/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. data Command Group for Interacting with the Data generated by model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat generate data generate serve model serve train model train"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.4/html/installing/installing_on_aws |
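The snapshot import in the procedure above can take a while. The loop below is a small convenience sketch built only from the describe-import-snapshot-tasks call already shown; it assumes task_id is set as in the procedure and simply polls until the task leaves the active state.
# Poll the EC2 import task started above until it is no longer "active".
while true; do
  status=$(aws ec2 describe-import-snapshot-tasks \
    --import-task-ids "${task_id}" \
    --query 'ImportSnapshotTasks[0].SnapshotTaskDetail.Status' --output text)
  echo "import task ${task_id}: ${status}"
  [ "${status}" != "active" ] && break
  sleep 30
done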
A.3. Accessing Red Hat Documentation | A.3. Accessing Red Hat Documentation Red Hat Product Documentation located at https://access.redhat.com/documentation/ serves as a central source of information. It provides different kinds of books from release and technical notes to installation, user, and reference guides in HTML, PDF, and EPUB formats. The following is a brief list of documents that are directly or indirectly relevant to this book: Red Hat Software Collections 3.8 Release Notes - The Release Notes for Red Hat Software Collections 3.8 document the major features and contains other information about Red Hat Software Collections, a Red Hat offering that provides a set of dynamic programming languages, database servers, and various related packages. Red Hat Developer Toolset 12.1 User Guide - The User Guide for Red Hat Developer Toolset 12.1 contains information about Red Hat Developer Toolset, a Red Hat offering for developers on the Red Hat Enterprise Linux platform. Using Software Collections, Red Hat Developer Toolset provides current versions of the GCC compiler, GDB debugger and other binary utilities. Using Red Hat Software Collections 3.8 Container Images - This guide provides information on how to use container images based on Red Hat Software Collections. The available container images include applications, daemons, and databases. The images can be run on Red Hat Enterprise Linux 7 Server and Red Hat Enterprise Linux Atomic Host. Red Hat Enterprise Linux 7 Developer Guide - The Developer Guide for Red Hat Enterprise Linux 7 provides detailed description of Red Hat Developer Toolset features, as well as an introduction to Red Hat Software Collections, and information on libraries and runtime support, compiling and building, debugging, and profiling. Red Hat Enterprise Linux 7 System Administrator's Guide - The System Administrator's Guide for Red Hat Enterprise Linux 7 documents relevant information regarding the deployment, configuration, and administration of Red Hat Enterprise Linux 7. Red Hat Enterprise Linux 6 Developer Guide - The Developer Guide for Red Hat Enterprise Linux 6 provides detailed description of Red Hat Developer Toolset features, as well as an introduction to Red Hat Software Collections, and information on libraries and runtime support, compiling and building, debugging, and profiling. Red Hat Enterprise Linux 6 Deployment Guide - The Deployment Guide for Red Hat Enterprise Linux 6 documents relevant information regarding the deployment, configuration, and administration of Red Hat Enterprise Linux 6. | null | https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/packaging_guide/sect-online_documentation |
Red Hat OpenStack Services on OpenShift Certification Workflow Guide | Red Hat OpenStack Services on OpenShift Certification Workflow Guide Red Hat Software Certification 2025 For Use with Red Hat OpenStack 18 Red Hat Customer Content Services | [
"subscription-manager register",
"subscription-manager list --available*",
"subscription-manager attach --pool=<pool_ID>",
"subscription-manager repos --enable=cert-1-for-rhel-9-x86_64-rpms",
"dnf install redhat-certification dnf install redhat-certification-rhoso",
"rhoso-cert-init",
"share.share_network_id <network_uuid> share.alt_share_network_id <network_uuid> share.admin_share_network_id <network_uuid> share.security_service <security_service_mapping> share.backend_replication_type <replication_style> share.username_for_user_rules <Username> share.override_ip_for_nfs_access <IP/CIDR> volume.build_interval 10 volume.build_timeout 300 volume.storage_protocol <iSCSI, FC, NVMe-TCP, NFS, etc.> volume.vendor_name <Driver's vendor name>",
"rhoso-cert-run",
"rhoso-cert-logs",
"rhoso-cert-logs Spawning a pod to access the logs in PVC rhoso-cert-cinder-6f855 pod/rhoso-cert-cinder-logs created Waiting for the rhoso-cert-cinder-logs pod to be ready Saving tempest logs Saving logs from the individual Tempest pods: Saving rhoso-cert-cinder-volumes-workflow-step-0-q9btc.log Saving rhoso-cert-cinder-backups-workflow-step-1-5p9xf.log Saving rhoso-cert-cinder-multi-attach-volume-workflow-step-2-zpwh2.log Saving rhoso-cert-cinder-consistency-groups-workflow-step-3-tmb24.log Collect a must-gather report? [y/N] : n Done. Logs are stored in rhoso-cert-cinder-2024-Aug-01_11-48-08",
"rhoso-cert-save",
"rhoso-cert-cleanup",
"cp /usr/share/redhat-certification-rhoso/rhoso-cert-debug.yaml <current test directory>",
"rhoso-cert-init rhoso-cert-run",
"rhoso-cert-test-accounts",
"rhoso-cert-cleanup",
"rhoso-cert-init",
"sudo rhcert-cli login sudo rhcert-cli upload",
"rhcert-cli login",
"rhcert-cli upload",
"rhcert-cli upload --certification-id xxxxx --description \"Any file description\" --file /var/log/redhat-certification/xyz.tgz",
"[user1@testsystem ~]USD sudo rhcert-cli upload Please enter the Certification ID: 625817 Please enter description: Cinder result file upload Please enter the result path: /var/log/redhat-certification/rhoso-cert-cinder-2024-Jul-31_04-25-24.tgz Uploading zip files to Red Hat for the Certification ID: 625817 Authorization failed Please visit https://sso.redhat.com/auth/realms/redhat-external/device?user_code=FOJQ-BLZS and grant the authorization for this host Have you granted the authorization? (yes|no) yes response: yes response: True",
"Success: Test results rhoso-cert-cinder-2024-Jul-31_04-25-24.tgz uploaded to certification ID 625817"
]
| https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html-single/red_hat_openstack_services_on_openshift_certification_workflow_guide/index |
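For repeat runs, the documented steps can be chained into one driver script. This is only a convenience sketch using the commands shown in this guide; the certification ID, description, and archive name are the placeholders and examples from the text, and rhcert-cli may need to be run with sudo as in the earlier examples.
# Run a full certification pass end to end and upload the results.
set -euo pipefail
rhoso-cert-init       # prepare the certification test environment
rhoso-cert-run        # execute the Tempest-based certification workflow
rhoso-cert-logs       # collect the per-step Tempest logs
rhoso-cert-save       # package the results for upload
rhcert-cli upload \
  --certification-id 625817 \
  --description "Cinder result file upload" \
  --file /var/log/redhat-certification/<results-archive>.tgz
rhoso-cert-cleanup    # remove the resources created for the test run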
Chapter 5. Kernel | Chapter 5. Kernel Red Hat Enterprise Linux 7 includes the kernel version 3.10, which provides a number of new features, the most notable of which are listed below. Dynamic kernel Patching Red Hat Enterprise Linux 7 introduces kpatch , a dynamic "kernel patching utility", as a Technology Preview. kpatch allows users to manage a collection of binary kernel patches which can be used to dynamically patch the kernel without rebooting. Note that kpatch is supported to run on AMD64 and Intel 64 architectures only. Support for Large crashkernel Sizes Red Hat Enterprise Linux 7 supports the kdump crash dumping mechanism on systems with large memory (up to 3TB). Crashkernel With More Than 1 CPU Red Hat Enterprise Linux 7 enables booting crashkernel with more than one CPU. This function is supported as a Technology Preview. Swap Memory Compression Red Hat Enterprise Linux 7 introduces a new feature, swap memory compression. Swap compression is performed through zswap , a thin back end for frontswap . Utilizing the swap memory compression technology ensures a significant I/O reduction and performance gains. NUMA-Aware Scheduling and Memory Allocation In Red Hat Enterprise Linux 7, the kernel automatically relocates processes and memory between NUMA nodes in the same system, in order to improve performance on systems with non-uniform memory access (NUMA). APIC Virtualization Virtualization of Advanced Programmable Interrupt Controller (APIC) registers is supported by utilizing hardware capabilities of new processors to improve virtual machine monitor (VMM) interrupt handling. vmcp Built in the Kernel In Red Hat Enterprise Linux 7, the vmcp kernel module is built into the kernel. This ensures that the vmcp device node is always present, and users can send IBM z/VM hypervisor control program commands without having to load the vmcp kernel module first. Hardware Error Reporting Mechanism The hardware error reporting mechanisms could previously be problematic because various tools were used to collect errors from different sources with different methods, and different tools were used to report the error events. Red Hat Enterprise Linux 7 introduces Hardware Event Report Mechanism, or HERM. This new infrastructure refactors the Error Detection and Correction (EDAC) mechanism of dual in-line memory module (DIMM) error reporting and also provides new ways to gather system-reported memory errors. The error events are reported to user space in a sequential timeline and single location. HERM in Red Hat Enterprise Linux 7 also introduces a new user space daemon, rasdaemon , which replaces the tools previously included in the edac-utils package. The rasdaemon catches and handles all Reliability, Availability, and Serviceability (RAS) error events that come from the kernel tracing infrastructure, and logs them. HERM in Red Hat Enterprise Linux 7 also provides the tools to report the errors and is able to detect different types of errors such as burst and sparse errors. Full DynTick Support The nohz_full boot parameter extends the original tickless kernel feature to an additional case when the tick can be stopped, when the per-cpu nr_running=1 setting is used. That is, when there is a single runnable task on a CPU's run queue. Blacklisting kernel Modules The modprobe utility included with Red Hat Enterprise Linux 7 allows users to blacklist kernel modules at installation time. 
To globally disable autoloading of a module, use this option on the kernel command line: For more information on kpatch , see http://rhelblog.redhat.com/2014/02/26/kpatch/ . dm-era Target Red Hat Enterprise Linux 7 introduces the dm-era device-mapper target as a Technology Preview. dm-era keeps track of which blocks were written within a user-defined period of time called an "era". Each era target instance maintains the current era as a monotonically increasing 32-bit counter. This target enables backup software to track which blocks have changed since the last backup. It also allows for partial invalidation of the contents of a cache to restore cache coherency after rolling back to a vendor snapshot. The dm-era target is primarily expected to be paired with the dm-cache target. Concurrent flash MCL updates As a Technology Preview, Microcode level upgrades (MCL) have been enabled in Red Hat Enterprise Linux 7.0 on the IBM System z architecture. These upgrades can be applied without impacting I/O operations to the flash storage media and notify users of the changed flash hardware service level. libhugetlbfs Support for IBM System z The libhugetlbfs library is now supported on IBM System z architecture. The library enables transparent exploitation of large pages in C and C++ programs. Applications and middleware programs can profit from the performance benefits of large pages without changes or recompilations. AMD Microcode and AMD Opteron Support AMD provides microcode patch support for processors belonging to AMD processor families 10h, 11h, 12h, 14h, and 15h. Microcode patches contain fixes for processor errata, which ensures that the processor microcode patch level is at the latest level. One single container file contains all microcode patches for AMD families 10h, 11h, 12h, 14h processors. A separate container file contains patches for AMD family 15h processors. Note that microcode patches are not incremental, therefore, you only need to make sure you have the latest container file for your AMD processor family. To obtain these microcode patches for your AMD-based platform running Red Hat Enterprise Linux 7: Clone the repository with firmware files. Move the AMD microcode files into the /lib/firmware/ directory. As root : Available Memory for /proc/meminfo A new entry to the /proc/meminfo file has been introduced to provide the MemAvailable field. MemAvailable provides an estimate of how much memory is available for starting new applications, without swapping. However, unlike the data provided by the Cache or Free fields, MemAvailable takes into account page cache and also that not all reclaimable memory slabs will be reclaimable due to items being in use. Open vSwitch Kernel Module Red Hat Enterprise Linux 7 includes the Open vSwitch kernel module as an enabler for Red Hat's layered product offerings. Open vSwitch is supported only in conjunction with those products containing the accompanying user space utilities. Please note that without these required user space utilities, Open vSwitch will not function and can not be enabled for use. For more information, please refer to the following Knowledge Base article: https://access.redhat.com/knowledge/articles/270223 . Intel Ethernet Server Adapter X710/XL710 Support Red Hat Enterprise Linux 7 adds the i40e and i40evf kernel drivers, which enable support for Intel X710 and XL710 family Ethernet adapters. These drivers are provided as Technology Preview only. | [
"modprobe.blacklist= module",
"~]USD git clone git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git",
"~]# cp -r linux-firmware/amd-ucode/ /lib/firmware/"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.0_release_notes/chap-red_hat_enterprise_linux-7.0_release_notes-kernel |
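Two of the items above can be exercised straight from a shell. The sketch below is illustrative only: example_module is a placeholder, the grubby invocation is one common way to persist a kernel command-line option on RHEL 7, and both grubby commands must be run as root.
# Read the new MemAvailable estimate added to /proc/meminfo.
grep MemAvailable /proc/meminfo

# Persistently add the modprobe.blacklist= option to the kernel command line
# of all installed kernels.
grubby --update-kernel=ALL --args="modprobe.blacklist=example_module"

# Confirm the argument is present on the default boot entry.
grubby --info=DEFAULT | grep -i args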
Chapter 139. KafkaBridgeProducerSpec schema reference | Chapter 139. KafkaBridgeProducerSpec schema reference Used in: KafkaBridgeSpec Full list of KafkaBridgeProducerSpec schema properties Configures producer options for the Kafka Bridge. Example Kafka Bridge producer configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # ... producer: enabled: true config: acks: 1 delivery.timeout.ms: 300000 # ... Use the producer.config properties to configure Kafka options for the producer as keys. The values can be one of the following JSON types: String Number Boolean Exceptions You can specify and configure the options listed in the Apache Kafka configuration documentation for producers . However, Streams for Apache Kafka takes care of configuring and managing options related to the following, which cannot be changed: Kafka cluster bootstrap address Security (encryption, authentication, and authorization) Consumer group identifier Properties with the following prefixes cannot be set: bootstrap.servers sasl. security. ssl. If the config property contains an option that cannot be changed, it is disregarded, and a warning message is logged to the Cluster Operator log file. All other supported options are forwarded to Kafka Bridge, including the following exceptions to the options configured by Streams for Apache Kafka: Any ssl configuration for supported TLS versions and cipher suites Important The Cluster Operator does not validate the keys or values of config properties. If an invalid configuration is provided, the Kafka Bridge deployment might not start or might become unstable. In this case, fix the configuration so that the Cluster Operator can roll out the new configuration to all Kafka Bridge nodes. 139.1. KafkaBridgeProducerSpec schema properties Property Property type Description enabled boolean Whether the HTTP producer should be enabled or disabled. The default is enabled ( true ). config map The Kafka producer configuration used for producer instances created by the bridge. Properties with the following prefixes cannot be set: ssl., bootstrap.servers, sasl., security. (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). | [
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # producer: enabled: true config: acks: 1 delivery.timeout.ms: 300000 #"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-kafkabridgeproducerspec-reference |
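Once a bridge with the producer enabled is running, records are sent through its HTTP API. The curl call below is an illustrative sketch, not part of this schema reference: the host name assumes the usual <bridge-name>-bridge-service service on port 8080 for the my-bridge example above, and my-topic is a placeholder topic that must already exist.
# Send two JSON-encoded records to a topic through the bridge's HTTP producer.
curl -s -X POST http://my-bridge-bridge-service:8080/topics/my-topic \
  -H 'Content-Type: application/vnd.kafka.json.v2+json' \
  -d '{"records":[{"key":"event-1","value":{"status":"ok"}},{"value":"plain text payload"}]}'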
Chapter 10. Installation configuration parameters for IBM Cloud | Chapter 10. Installation configuration parameters for IBM Cloud Before you deploy an OpenShift Container Platform cluster on IBM Cloud(R), you provide parameters to customize your cluster and the platform that hosts it. When you create the install-config.yaml file, you provide values for the required parameters through the command line. You can then modify the install-config.yaml file to customize your cluster further. 10.1. Available installation configuration parameters for IBM Cloud The following tables specify the required, optional, and IBM Cloud-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 10.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 10.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 10.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 10.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. OVNKubernetes . OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. 
An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OVN-Kubernetes network plugins supports only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . If you are deploying the cluster to an existing Virtual Private Cloud (VPC), the CIDR must contain the subnets defined in platform.ibmcloud.controlPlaneSubnets and platform.ibmcloud.computeSubnets . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 10.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 10.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. 
Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. Mint , Passthrough , Manual or an empty string ( "" ). Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 
Note If you are using Azure File storage, you cannot enable FIPS mode. false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 10.1.4. Additional IBM Cloud configuration parameters Additional IBM Cloud(R) configuration parameters are described in the following table: Table 10.4. Additional IBM Cloud(R) parameters Parameter Description Values An IBM(R) Key Protect for IBM Cloud(R) (Key Protect) root key that should be used to encrypt the root (boot) volume of only control plane machines. The Cloud Resource Name (CRN) of the root key. The CRN must be enclosed in quotes (""). A Key Protect root key that should be used to encrypt the root (boot) volume of only compute machines. The CRN of the root key. The CRN must be enclosed in quotes (""). A Key Protect root key that should be used to encrypt the root (boot) volume of all of the cluster's machines. When specified as part of the default machine configuration, all managed storage classes are updated with this key. As such, data volumes that are provisioned after the installation are also encrypted using this key. The CRN of the root key. The CRN must be enclosed in quotes (""). The name of an existing resource group. By default, an installer-provisioned VPC and cluster resources are placed in this resource group. When not specified, the installation program creates the resource group for the cluster. If you are deploying the cluster into an existing VPC, the installer-provisioned cluster resources are placed in this resource group. When not specified, the installation program creates the resource group for the cluster. The VPC resources that you have provisioned must exist in a resource group that you specify using the networkResourceGroupName parameter. In either case, this resource group must only be used for a single cluster installation, as the cluster components assume ownership of all of the resources in the resource group. [ 1 ] String, for example existing_resource_group . A list of service endpoint names and URIs. By default, the installation program and cluster components use public service endpoints to access the required IBM Cloud(R) services. If network restrictions limit access to public service endpoints, you can specify an alternate service endpoint to override the default behavior. You can specify only one alternate service endpoint for each of the following services: Cloud Object Storage DNS Services Global Search Global Tagging Identity Services Key Protect Resource Controller Resource Manager VPC A valid service endpoint name and fully qualified URI. 
Valid names include: COS DNSServices GlobalServices GlobalTagging IAM KeyProtect ResourceController ResourceManager VPC The name of an existing resource group. This resource contains the existing VPC and subnets to which the cluster will be deployed. This parameter is required when deploying the cluster to a VPC that you have provisioned. String, for example existing_network_resource_group . The new dedicated host to create. If you specify a value for platform.ibmcloud.dedicatedHosts.name , this parameter is not required. Valid IBM Cloud(R) dedicated host profile, such as cx2-host-152x304 . [ 2 ] An existing dedicated host. If you specify a value for platform.ibmcloud.dedicatedHosts.profile , this parameter is not required. String, for example my-dedicated-host-name . The instance type for all IBM Cloud(R) machines. Valid IBM Cloud(R) instance type, such as bx2-8x32 . [ 2 ] The name of the existing VPC that you want to deploy your cluster to. String. The name(s) of the existing subnet(s) in your VPC that you want to deploy your control plane machines to. Specify a subnet for each availability zone. String array The name(s) of the existing subnet(s) in your VPC that you want to deploy your compute machines to. Specify a subnet for each availability zone. Subnet IDs are not supported. String array Whether you define an existing resource group, or if the installer creates one, determines how the resource group is treated when the cluster is uninstalled. If you define a resource group, the installer removes all of the installer-provisioned resources, but leaves the resource group alone; if a resource group is created as part of the installation, the installer removes all of the installer-provisioned resources and the resource group. To determine which profile best meets your needs, see Instance Profiles in the IBM(R) documentation. | [
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"publish:",
"sshKey:",
"controlPlane: platform: ibmcloud: bootVolume: encryptionKey:",
"compute: platform: ibmcloud: bootVolume: encryptionKey:",
"platform: ibmcloud: defaultMachinePlatform: bootvolume: encryptionKey:",
"platform: ibmcloud: resourceGroupName:",
"platform: ibmcloud: serviceEndpoints: - name: url:",
"platform: ibmcloud: networkResourceGroupName:",
"platform: ibmcloud: dedicatedHosts: profile:",
"platform: ibmcloud: dedicatedHosts: name:",
"platform: ibmcloud: type:",
"platform: ibmcloud: vpcName:",
"platform: ibmcloud: controlPlaneSubnets:",
"platform: ibmcloud: computeSubnets:"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_ibm_cloud/installation-config-parameters-ibm-cloud-vpc |
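The tables above describe each parameter in isolation; the following is a minimal, illustrative sketch of how they can fit together for an existing-VPC IBM Cloud install, driven from a shell. All concrete values (domain, cluster name, VPC, subnet and resource group names) are placeholders, and the region field is an assumption not covered by the tables in this chapter, so adjust everything to your account before use.

```bash
#!/usr/bin/env bash
# Illustrative sketch only: writes a minimal install-config.yaml for an
# existing-VPC IBM Cloud installation. Every name below is a placeholder.
CLUSTER_DIR=./ocp-ibmcloud
mkdir -p "${CLUSTER_DIR}"

cat > "${CLUSTER_DIR}/install-config.yaml" <<'EOF'
apiVersion: v1
baseDomain: example.com            # required: cluster DNS is <name>.<baseDomain>
metadata:
  name: dev                        # lowercase letters, hyphens, periods
controlPlane:
  name: master
  replicas: 3
  platform:
    ibmcloud:
      type: bx2-8x32               # valid IBM Cloud instance profile
compute:
- name: worker
  replicas: 3
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  machineNetwork:
  - cidr: 10.0.0.0/16              # must contain the existing subnets below
platform:
  ibmcloud:
    region: us-south               # assumed field; not listed in the tables above
    resourceGroupName: my-cluster-rg
    networkResourceGroupName: my-vpc-rg
    vpcName: my-vpc
    controlPlaneSubnets: [subnet-cp-1, subnet-cp-2, subnet-cp-3]
    computeSubnets: [subnet-worker-1, subnet-worker-2, subnet-worker-3]
publish: External
pullSecret: '<paste pull secret JSON here>'
sshKey: 'ssh-ed25519 AAAA...'
EOF

# The installer consumes (and removes) install-config.yaml from this directory:
# openshift-install create cluster --dir "${CLUSTER_DIR}"
```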
Chapter 12. Configuring the transactions subsystem | Chapter 12. Configuring the transactions subsystem 12.1. Configuring the transactions subsystem When you manage an enterprise application that processes financial transactions, order management, or other critical workflows, you need to ensure reliable operations and data consistency. The transactions subsystem in JBoss EAP gives you full control over the Transaction Manager (TM), letting you configure timeout values, enable transaction logging, and collect statistics. If your application runs across multiple systems and transaction propagation is required, then at least one of JBoss Remoting or JTS needs to be enabled. JBoss EAP uses the Narayana transaction manager to handle transactions efficiently. It supports industry-standard protocols like Jakarta Transactions, JTS, and Web Services Transactions. Whether you update databases, send messages, or coordinate distributed services, the transactions subsystem ensures consistency, reliability, and resilience. Additional resources Managing Transactions on JBoss EAP . Note Please note that this link directs you to the Managing Transactions Guide for JBoss EAP 7.4. We are currently in the process of updating the documentation for Red Hat JBoss Enterprise Application Platform 8.0. This link will be updated once the new documentation is complete. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/configuration_guide/configuring_the_transactions_subsystem
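As a concrete illustration of the knobs mentioned above (timeout values, statistics collection), here is a minimal sketch using the JBoss EAP management CLI. The install path and the exact attribute names (default-timeout, statistics-enabled) are assumptions that can vary between EAP releases, so confirm them with :read-resource-description on your server before applying anything.

```bash
# Sketch only: tune the transactions subsystem over the management CLI.
EAP_HOME=/opt/jboss-eap        # assumed install path -- adjust for your system
CLI="${EAP_HOME}/bin/jboss-cli.sh --connect"

# Raise the default transaction timeout to 300 seconds
${CLI} --command='/subsystem=transactions:write-attribute(name=default-timeout,value=300)'

# Enable Transaction Manager statistics (commit/abort/timeout counters)
${CLI} --command='/subsystem=transactions:write-attribute(name=statistics-enabled,value=true)'

# Review the resulting configuration, including runtime values
${CLI} --command='/subsystem=transactions:read-resource(include-runtime=true)'
```

Some attributes may require a server reload before they take effect.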
Chapter 5. Changing the update approval strategy To ensure that the storage system gets updated automatically when a new update is available in the same channel, we recommend keeping the update approval strategy set to Automatic . Changing the update approval strategy to Manual requires manual approval for each upgrade. Procedure Navigate to Operators Installed Operators . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Click on the OpenShift Data Foundation operator name. Go to the Subscription tab. Click on the pencil icon for changing the Update approval . Select the update approval strategy and click Save . Verification steps Verify that the Update approval shows the newly selected approval strategy below it. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/updating_openshift_data_foundation/changing-the-update-approval-strategy_rhodf
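The procedure above uses the web console; the same setting can also be flipped from the command line, because the Update approval strategy is stored in the operator Subscription's spec.installPlanApproval field. This is a sketch: the subscription name odf-operator is an assumption, so list the subscriptions in the namespace first and substitute the real name.

```bash
# List the subscriptions in the storage namespace to confirm the name
oc get subscriptions -n openshift-storage

# Switch to manual approval (each update then waits for an approved InstallPlan)
oc patch subscription odf-operator -n openshift-storage \
  --type merge -p '{"spec":{"installPlanApproval":"Manual"}}'

# Switch back to automatic approval
oc patch subscription odf-operator -n openshift-storage \
  --type merge -p '{"spec":{"installPlanApproval":"Automatic"}}'

# Verify that the strategy matches what the console shows
oc get subscription odf-operator -n openshift-storage \
  -o jsonpath='{.spec.installPlanApproval}{"\n"}'
```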
Chapter 4. Clustering | Chapter 4. Clustering New Pacemaker features The Red Hat Enterprise Linux 6.8 release supports the following Pacemaker features: You can now use the pcs resource relocate run command to move a resource to its preferred node, as determined by current cluster status, constraints, location of resources and other settings. When configuring fencing for redundant power supplies, you now are only required to define each device once and to specify that both devices are required to fence the node. The new resource-discovery location constraint option allows you to indicate whether Pacemaker should perform resource discovery on a node for a specified resource. Resources will now start as soon as their state has been confirmed on all nodes and all dependencies have been satisfied, rather than waiting for the state of all resources to be confirmed. This allows for faster startup of some services, and more even startup load. Clone resources support a new clone-min metadata option, specifying that a certain number of instances must be running before any dependent resources can run. This is particularly useful for services behind a virtual IP and haproxy, as is often done with OpenStack. These features are documented in Configuring the Red Hat High Availability Add-On with Pacemaker , available at https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Configuring_the_Red_Hat_High_Availability_Add-On_with_Pacemaker/index.html. (BZ# 1290458 ) Graceful migration of resources when the pacemaker_remote service is stopped on an active Pacemaker Remote node If the pacemaker_remote service is stopped on an active Pacemaker Remote node, the cluster will gracefully migrate resources off the node before stopping the node. Previously, Pacemaker Remote nodes were fenced when the service was stopped (including by commands such as yum update ), unless the node was first explicitly taken out of the cluster. Software upgrades and other routine maintenance procedures are now much easier to perform on Pacemaker Remote nodes. Note: All nodes in the cluster must be upgraded to a version supporting this feature before it can be used on any node. (BZ# 1297564 ) Support for SBD fencing with Pacemaker The SBD (Storage-Based Death) daemon integrates with Pacemaker, a watchdog device, and, optionally, shared storage to arrange for nodes to reliably self-terminate when fencing is required. SBD can be particularly useful in environments where traditional fencing mechanisms are not possible. For information on using SBD with Pacemaker, see https://access.redhat.com/articles/2212861 . (BZ#1313246) The glocktop tool has been added to gfs2-utils The gfs2-utils package now includes the glocktop tool, which can be used to troubleshoot locking-related performance problems that concern the Global File System 2 (GFS2). (BZ#1202817) pcs now supports exporting a cluster configuration to a list of pcs commands With this update, the pcs config export command can be used to export a cluster configuration to a list of pcs commands. Also, the pcs config import-cman command, which converts a CMAN cluster configuration to a Pacemaker cluster configuration, can now output a list of pcs commands that can be used to create the Pacemaker cluster configuration file. As a result, the user can determine what commands can be used to set up a cluster based on its configuration files. (BZ# 1264795 ) Fence agent for APC now supports firmware 6.x The fence agent for APC now support firmware 6.x. 
(BZ# 1259254 ) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.8_release_notes/new_features_clustering |
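For the features called out in this chapter, the following shell sketch shows what the corresponding pcs invocations can look like. The resource names (webserver, haproxy), node name, and constraint id are placeholders, and exact option syntax may differ slightly between pcs versions, so check pcs help on your release before running.

```bash
# Move resources to their preferred nodes, as determined by current
# cluster status and constraints:
pcs resource relocate run

# Limit resource discovery via the new resource-discovery option on a
# location constraint (placeholder resource, node, and constraint id):
pcs constraint location add web-loc webserver node2.example.com INFINITY \
    resource-discovery=exclusive

# Require at least two clone instances before dependent resources may start:
pcs resource clone haproxy clone-min=2
```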
Chapter 33. Servers and Services | Chapter 33. Servers and Services The named service now binds to all interfaces With this update, BIND is able to react to situations when a new IP address is added to an interface. If the new address is allowed by the configuration, BIND will automatically start to listen on that interface. (BZ# 1294506 ) Fix for tomcat-digest to generate password hashes When using the tomcat-digest utility to create an SHA hash of Tomcat passwords, the command terminated unexpectedly with the ClassNotFoundException Java exception. A patch has been provided to fix this bug and tomcat-digest now generates password hashes as expected. (BZ# 1240279 ) Tomcat can now use shell expansion in configuration files within the new conf.d directory Previously, the /etc/sysconfig/tomcat and /etc/tomcat/tomcat.conf files were loaded without shell expansion, causing the application to terminate unexpectedly. This update provides a mechanism for using shell expansion in the Tomcat configuration files by adding a new configuration directory, /etc/tomcat/conf.d . Any files placed in the new directory may now include shell variables. (BZ# 1221896 ) Fix for the tomcat-jsvc service unit to create two independent Tomcat servers When trying to start multiple independent Tomcat servers, the second server failed to start due to the jsvc service returning an error. This update fixes the jsvc systemd service unit as well as the handling of the TOMCAT_USER variable. (BZ# 1201409 ) The dbus-daemon service no longer becomes unresponsive due to leaking file descriptors Previously, the dbus-daemon service incorrectly handled multiple messages containing file descriptors if they were received in a short time period. As a consequence, dbus-daemon leaked file descriptors and became unresponsive. A patch has been applied to correctly handle multiple file descriptors from different messages inside dbus-daemon . As a result, dbus-daemon closes and passes file descriptors correctly and no longer becomes unresponsive in the described situation. (BZ# 1325870 ) Update for marking tomcat-admin-webapps package configration files Previously, the tomcat-admin-webapps web.xml files were not marked as the configuration files. Consequently, upgrading the tomcat-admin-webapps package overwrote the /usr/share/tomcat/webapps/host-manager/WEB-INF/web.xml and /usr/share/tomcat/webapps/manager/WEB-INF/web.xml files, causing custom user configuration to be automatically removed. This update fixes classification of these files, thus preventing this problem. (BZ# 1208402 ) Ghostcript no longer hangs when converting a PDF file to PNG Previously, when converting a PDF file into a PNG file, Ghostscript could become unresponsive. This bug has been fixed, and the conversion time is now proportional to the size of the PDF file being converted. (BZ# 1302121 ) The named-chroot service now starts correctly Due to a regression, the -t /var/named/chroot option was omitted in the named-chroot.service file. As a consequence, if the /etc/named.conf file was missing, the named-chroot service failed to start. Additionally, if different named.conf files existed in the /etc/ and /var/named/chroot/etc/ directories, the named-checkconf utility incorrectly checked the one in the changed-root directory when the service was started. With this update, the option in the service file has been added and the named-chroot service now works correctly. 
(BZ# 1278082 ) AT-SPI2 driver added to brltty The Assistive Technology Service Provider Interface driver version 2 (AT-SPI2) has been added to the brltty daemon. AT-SPI2 enables using brltty with, for example, the GNOME Accessibility Toolkit. (BZ# 1324672 ) A new --ignore-missing option for tuned-adm verify The --ignore-missing command-line option has been added to the tuned-adm verify command. This command verifies whether a Tuned profile has been successfully applied, and displays differences between the requested Tuned profile and the current system settings. The --ignore-missing parameter causes tuned-adm verify to silently skip features that are not supported on the system, thus preventing the described errors. (BZ# 1243807 ) The new modules Tuned plug-in The modules plug-in allows Tuned to load and reload kernel modules with parameters specified in the the settings of the Tuned profiles. (BZ# 1249618 ) The number of inotify user watches increased to 65536 To allow for more pods on an Red Hat Enterprise Linux Atomic host, the number of inotify user watches has been increased by a factor of 8 to 65536. (BZ# 1322001 ) Timer migration for realtime Tuned profile has been disabled Previously, the realtime Tuned profile that is included in the tuned-profiles-realtime package set the value of the kernel.timer_migration variable to 1. As a consequence, realtime applications could be negatively affected. This update disables the timer migration in the realtime profile. (BZ# 1323283 ) rcu-nocbs no longer missing from kernel boot parameters Previously, the rcu_nocbs kernel parameter was not set in the realtime-virtual-host and realtime-virtual-guest tuned profiles. With this update, rcu-nocbs is set as expected. (BZ# 1334479 ) The global limit on how much time realtime scheduling may use has been removed in realtime Tuned profile Prior to this update, the Tuned utility configuration for the kernel.sched_rt_runtime_us sysctl variable in the realtime profile included in the tuned-profiles-realtime package was incorrect. As a consequence, creating a virtual machine instance caused an error due to incompatible scheduling time. Now, the value of kernel.sched_rt_runtime_us is set to -1 (no limit), and the described problem no longer occurs. (BZ# 1346715 ) sapconf now detects the NTP configuration properly Previously, the sapconf utility did not check whether the host system was configured to use the Network Time Protocol (NTP). As a consequence, even when NTP was configured, sapconf displayed the following error: With this update, sapconf properly checks for the NTP configuration, and the described problem no longer occurs. (BZ# 1228550 ) sapconf lists default packages correctly Prior to this update, the sapconf utility passed an incorrect parameter to the repoquery utility, which caused sapconf not to list the default packages in package groups. The bug has been fixed, and sapconf now lists default packages as expected. (BZ#1235608) The logrotate utility now saves status to the /var/lib/logrotate/ directory Previously, the logrotate utility saved status to the /var/lib/logrotate.status file. Consequently, logrotate did not work on systems where /var/lib was a read-only file system. With this update, the status file has been moved to the new /var/lib/logrotate/ directory, which can be mounted with write permissions. As a result, logrotate now works on systems where /var/lib is a read-only file system. 
(BZ# 1272236 ) Support for printing to an SMB printer using Kerberos using cups With this update, the cups package creates the symbolic link /usr/lib/cups/backend/smb referring to the /usr/libexec/samba/cups_backend_smb file. The symbolic link is used by the smb_krb5_wrapper utility to print to an server message block (SMB)-shared printer using Kerberos authentication. (BZ#1302055) Newly installed tomcat package has a correct shell pointing to /sbin/nologin Previously, the postinstall script set the Tomcat shell to /bin/nologin , which does not exist. Consequently, users failed to get a helpful message about the login access denial when attempting to log in as Tomcat user. This bug has been fixed, and the postinstall script now corectly sets the Tomcat shell to /sbin/nologin . (BZ# 1277197 ) | [
"3: NTP Service should be configured and started"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.3_release_notes/bug_fixes_servers_and_services |
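A few of the fixes above can be exercised directly from a shell. The following sketch assumes the stock service and path names from this chapter (named-chroot, /var/named/chroot, /var/lib/logrotate/); adapt them if your system differs.

```bash
# Verify the active Tuned profile, skipping features this host does not support
tuned-adm active
tuned-adm verify --ignore-missing

# Confirm logrotate now keeps its state under the new directory
ls -l /var/lib/logrotate/

# Check the chrooted BIND configuration that the named-chroot unit uses
named-checkconf -t /var/named/chroot /etc/named.conf
systemctl status named-chroot
```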
Preface | Preface This guide provides information and instructions for implementing Fuse transactional applications. The information is organized as follows: Chapter 1, Introduction to transactions Chapter 2, Getting started with transactions on Karaf (OSGi) Chapter 3, Interfaces for configuring and referencing transaction managers Chapter 4, Configuring the Narayana transaction manager Chapter 5, Using the Narayana transaction manager Chapter 6, Using JDBC data sources Chapter 7, Using JMS connection factories Chapter 8, About Java connector architecture Chapter 9, Writing a Camel application that uses transactions | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_karaf_transaction_guide/pr01 |
Chapter 2. CSIDriver [storage.k8s.io/v1] | Chapter 2. CSIDriver [storage.k8s.io/v1] Description CSIDriver captures information about a Container Storage Interface (CSI) volume driver deployed on the cluster. Kubernetes attach detach controller uses this object to determine whether attach is required. Kubelet uses this object to determine whether pod information needs to be passed on mount. CSIDriver objects are non-namespaced. Type object Required spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object metadata. metadata.Name indicates the name of the CSI driver that this object refers to; it MUST be the same name returned by the CSI GetPluginName() call for that driver. The driver name must be 63 characters or less, beginning and ending with an alphanumeric character ([a-z0-9A-Z]) with dashes (-), dots (.), and alphanumerics between. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object CSIDriverSpec is the specification of a CSIDriver. 2.1.1. .spec Description CSIDriverSpec is the specification of a CSIDriver. Type object Property Type Description attachRequired boolean attachRequired indicates this CSI volume driver requires an attach operation (because it implements the CSI ControllerPublishVolume() method), and that the Kubernetes attach detach controller should call the attach volume interface which checks the volumeattachment status and waits until the volume is attached before proceeding to mounting. The CSI external-attacher coordinates with CSI volume driver and updates the volumeattachment status when the attach operation is complete. If the CSIDriverRegistry feature gate is enabled and the value is specified to false, the attach operation will be skipped. Otherwise the attach operation will be called. This field is immutable. fsGroupPolicy string fsGroupPolicy defines if the underlying volume supports changing ownership and permission of the volume before being mounted. Refer to the specific FSGroupPolicy values for additional details. This field is immutable. Defaults to ReadWriteOnceWithFSType, which will examine each volume to determine if Kubernetes should modify ownership and permissions of the volume. With the default policy the defined fsGroup will only be applied if a fstype is defined and the volume's access mode contains ReadWriteOnce. podInfoOnMount boolean podInfoOnMount indicates this CSI volume driver requires additional pod information (like podName, podUID, etc.) during mount operations, if set to true. If set to false, pod information will not be passed on mount. Default is false. The CSI driver specifies podInfoOnMount as part of driver deployment. If true, Kubelet will pass pod information as VolumeContext in the CSI NodePublishVolume() calls. The CSI driver is responsible for parsing and validating the information passed in as VolumeContext. 
The following VolumeContext will be passed if podInfoOnMount is set to true. This list might grow, but the prefix will be used. "csi.storage.k8s.io/pod.name": pod.Name "csi.storage.k8s.io/pod.namespace": pod.Namespace "csi.storage.k8s.io/pod.uid": string(pod.UID) "csi.storage.k8s.io/ephemeral": "true" if the volume is an ephemeral inline volume defined by a CSIVolumeSource, otherwise "false" "csi.storage.k8s.io/ephemeral" is a new feature in Kubernetes 1.16. It is only required for drivers which support both the "Persistent" and "Ephemeral" VolumeLifecycleMode. Other drivers can leave pod info disabled and/or ignore this field. As Kubernetes 1.15 doesn't support this field, drivers can only support one mode when deployed on such a cluster and the deployment determines which mode that is, for example via a command line parameter of the driver. This field is immutable. requiresRepublish boolean requiresRepublish indicates the CSI driver wants NodePublishVolume being periodically called to reflect any possible change in the mounted volume. This field defaults to false. Note: After a successful initial NodePublishVolume call, subsequent calls to NodePublishVolume should only update the contents of the volume. New mount points will not be seen by a running container. seLinuxMount boolean seLinuxMount specifies if the CSI driver supports "-o context" mount option. When "true", the CSI driver must ensure that all volumes provided by this CSI driver can be mounted separately with different -o context options. This is typical for storage backends that provide volumes as filesystems on block devices or as independent shared volumes. Kubernetes will call NodeStage / NodePublish with "-o context=xyz" mount option when mounting a ReadWriteOncePod volume used in Pod that has explicitly set SELinux context. In the future, it may be expanded to other volume AccessModes. In any case, Kubernetes will ensure that the volume is mounted only with a single SELinux context. When "false", Kubernetes won't pass any special SELinux mount options to the driver. This is typical for volumes that represent subdirectories of a bigger shared filesystem. Default is "false". storageCapacity boolean storageCapacity indicates that the CSI volume driver wants pod scheduling to consider the storage capacity that the driver deployment will report by creating CSIStorageCapacity objects with capacity information, if set to true. The check can be enabled immediately when deploying a driver. In that case, provisioning new volumes with late binding will pause until the driver deployment has published some suitable CSIStorageCapacity object. Alternatively, the driver can be deployed with the field unset or false and it can be flipped later when storage capacity information has been published. This field was immutable in Kubernetes ⇐ 1.22 and now is mutable. tokenRequests array tokenRequests indicates the CSI driver needs pods' service account tokens it is mounting volume for to do necessary authentication. Kubelet will pass the tokens in VolumeContext in the CSI NodePublishVolume calls. The CSI driver should parse and validate the following VolumeContext: "csi.storage.k8s.io/serviceAccount.tokens": { "<audience>": { "token": <token>, "expirationTimestamp": <expiration timestamp in RFC3339>, }, ... } Note: Audience in each TokenRequest should be different and at most one token is empty string. To receive a new token after expiry, RequiresRepublish can be used to trigger NodePublishVolume periodically. 
tokenRequests[] object TokenRequest contains parameters of a service account token. volumeLifecycleModes array (string) volumeLifecycleModes defines what kind of volumes this CSI volume driver supports. The default if the list is empty is "Persistent", which is the usage defined by the CSI specification and implemented in Kubernetes via the usual PV/PVC mechanism. The other mode is "Ephemeral". In this mode, volumes are defined inline inside the pod spec with CSIVolumeSource and their lifecycle is tied to the lifecycle of that pod. A driver has to be aware of this because it is only going to get a NodePublishVolume call for such a volume. For more information about implementing this mode, see https://kubernetes-csi.github.io/docs/ephemeral-local-volumes.html A driver can support one or more of these modes and more modes may be added in the future. This field is beta. This field is immutable. 2.1.2. .spec.tokenRequests Description tokenRequests indicates the CSI driver needs pods' service account tokens it is mounting volume for to do necessary authentication. Kubelet will pass the tokens in VolumeContext in the CSI NodePublishVolume calls. The CSI driver should parse and validate the following VolumeContext: "csi.storage.k8s.io/serviceAccount.tokens": { "<audience>": { "token": <token>, "expirationTimestamp": <expiration timestamp in RFC3339>, }, ... } Note: Audience in each TokenRequest should be different and at most one token is empty string. To receive a new token after expiry, RequiresRepublish can be used to trigger NodePublishVolume periodically. Type array 2.1.3. .spec.tokenRequests[] Description TokenRequest contains parameters of a service account token. Type object Required audience Property Type Description audience string audience is the intended audience of the token in "TokenRequestSpec". It will default to the audiences of kube apiserver. expirationSeconds integer expirationSeconds is the duration of validity of the token in "TokenRequestSpec". It has the same default value of "ExpirationSeconds" in "TokenRequestSpec". 2.2. API endpoints The following API endpoints are available: /apis/storage.k8s.io/v1/csidrivers DELETE : delete collection of CSIDriver GET : list or watch objects of kind CSIDriver POST : create a CSIDriver /apis/storage.k8s.io/v1/watch/csidrivers GET : watch individual changes to a list of CSIDriver. deprecated: use the 'watch' parameter with a list operation instead. /apis/storage.k8s.io/v1/csidrivers/{name} DELETE : delete a CSIDriver GET : read the specified CSIDriver PATCH : partially update the specified CSIDriver PUT : replace the specified CSIDriver /apis/storage.k8s.io/v1/watch/csidrivers/{name} GET : watch changes to an object of kind CSIDriver. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 2.2.1. /apis/storage.k8s.io/v1/csidrivers HTTP method DELETE Description delete collection of CSIDriver Table 2.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind CSIDriver Table 2.3. 
HTTP responses HTTP code Reponse body 200 - OK CSIDriverList schema 401 - Unauthorized Empty HTTP method POST Description create a CSIDriver Table 2.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.5. Body parameters Parameter Type Description body CSIDriver schema Table 2.6. HTTP responses HTTP code Reponse body 200 - OK CSIDriver schema 201 - Created CSIDriver schema 202 - Accepted CSIDriver schema 401 - Unauthorized Empty 2.2.2. /apis/storage.k8s.io/v1/watch/csidrivers HTTP method GET Description watch individual changes to a list of CSIDriver. deprecated: use the 'watch' parameter with a list operation instead. Table 2.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.3. /apis/storage.k8s.io/v1/csidrivers/{name} Table 2.8. Global path parameters Parameter Type Description name string name of the CSIDriver HTTP method DELETE Description delete a CSIDriver Table 2.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.10. HTTP responses HTTP code Reponse body 200 - OK CSIDriver schema 202 - Accepted CSIDriver schema 401 - Unauthorized Empty HTTP method GET Description read the specified CSIDriver Table 2.11. HTTP responses HTTP code Reponse body 200 - OK CSIDriver schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified CSIDriver Table 2.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.13. HTTP responses HTTP code Reponse body 200 - OK CSIDriver schema 201 - Created CSIDriver schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified CSIDriver Table 2.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.15. Body parameters Parameter Type Description body CSIDriver schema Table 2.16. HTTP responses HTTP code Reponse body 200 - OK CSIDriver schema 201 - Created CSIDriver schema 401 - Unauthorized Empty 2.2.4. /apis/storage.k8s.io/v1/watch/csidrivers/{name} Table 2.17. Global path parameters Parameter Type Description name string name of the CSIDriver HTTP method GET Description watch changes to an object of kind CSIDriver. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 2.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/storage_apis/csidriver-storage-k8s-io-v1 |
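To make the spec fields and endpoints above concrete, here is a sketch that registers a CSIDriver object for a hypothetical driver name (example.csi.vendor.io is not a real driver) and then exercises the list, get, and delete endpoints with the oc client; kubectl works the same way.

```bash
# Register a CSIDriver object (POST /apis/storage.k8s.io/v1/csidrivers)
cat <<'EOF' | oc apply -f -
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: example.csi.vendor.io      # must equal the driver's GetPluginName() result
spec:
  attachRequired: true             # driver implements ControllerPublishVolume()
  podInfoOnMount: true             # kubelet passes pod name/namespace/UID in VolumeContext
  fsGroupPolicy: File              # driver supports fsGroup-based ownership changes
  storageCapacity: false
  volumeLifecycleModes:
  - Persistent
EOF

# List and inspect (GET /apis/storage.k8s.io/v1/csidrivers[/{name}])
oc get csidrivers
oc get csidriver example.csi.vendor.io -o yaml

# Remove it again (DELETE /apis/storage.k8s.io/v1/csidrivers/{name})
oc delete csidriver example.csi.vendor.io
```

Note that several spec fields, such as attachRequired and podInfoOnMount, are immutable, so changing them requires deleting and recreating the object.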
1.3. Resource Controllers in Linux Kernel | 1.3. Resource Controllers in Linux Kernel A resource controller, also called a cgroup subsystem, represents a single resource, such as CPU time or memory. The Linux kernel provides a range of resource controllers, that are mounted automatically by systemd . Find the list of currently mounted resource controllers in /proc/cgroups , or use the lssubsys monitoring tool. In Red Hat Enterprise Linux 7, systemd mounts the following controllers by default: Available Controllers in Red Hat Enterprise Linux 7 blkio - sets limits on input/output access to and from block devices; cpu - uses the CPU scheduler to provide cgroup tasks access to the CPU. It is mounted together with the cpuacct controller on the same mount; cpuacct - creates automatic reports on CPU resources used by tasks in a cgroup. It is mounted together with the cpu controller on the same mount; cpuset - assigns individual CPUs (on a multicore system) and memory nodes to tasks in a cgroup; devices - allows or denies access to devices for tasks in a cgroup; freezer - suspends or resumes tasks in a cgroup; memory - sets limits on memory use by tasks in a cgroup and generates automatic reports on memory resources used by those tasks; net_cls - tags network packets with a class identifier ( classid ) that allows the Linux traffic controller (the tc command) to identify packets originating from a particular cgroup task. A subsystem of net_cls , the net_filter (iptables) can also use this tag to perform actions on such packets. The net_filter tags network sockets with a firewall identifier ( fwid ) that allows the Linux firewall (the iptables command) to identify packets (skb->sk) originating from a particular cgroup task; perf_event - enables monitoring cgroups with the perf tool; hugetlb - allows to use virtual memory pages of large sizes and to enforce resource limits on these pages. The Linux kernel exposes a wide range of tunable parameters for resource controllers that can be configured with systemd . See the kernel documentation (list of references in the Controller-Specific Kernel Documentation section) for detailed description of these parameters. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/resource_management_guide/br-Resource_Controllers_in_Linux_Kernel |
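The following is a short shell sketch of how these controllers are typically inspected and used on Red Hat Enterprise Linux 7; httpd.service is only an example unit and the property values are arbitrary.

```bash
# Controllers known to the kernel, with hierarchy and enabled state
cat /proc/cgroups

# The same information, including mount points (lssubsys is in libcgroup-tools)
lssubsys -am

# Apply per-service limits through the cpu and memory controllers via systemd
systemctl set-property httpd.service CPUShares=512 MemoryLimit=1G

# Inspect the resulting cgroup tree and live per-cgroup resource usage
systemd-cgls
systemd-cgtop
```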
Chapter 6. Securing access to Kafka | Chapter 6. Securing access to Kafka Secure your Kafka cluster by managing the access a client has to Kafka brokers. Specify configuration options to secure Kafka brokers and clients A secure connection between Kafka brokers and clients can encompass the following: Encryption for data exchange Authentication to prove identity Authorization to allow or decline actions executed by users The authentication and authorization mechanisms specified for a client must match those specified for the Kafka brokers. 6.1. Listener configuration Encryption and authentication in Kafka brokers is configured per listener. For more information about Kafka listener configuration, see Section 5.4.2, "Listeners" . Each listener in the Kafka broker is configured with its own security protocol. The configuration property listener.security.protocol.map defines which listener uses which security protocol. It maps each listener name to its security protocol. Supported security protocols are: PLAINTEXT Listener without any encryption or authentication. SSL Listener using TLS encryption and, optionally, authentication using TLS client certificates. SASL_PLAINTEXT Listener without encryption but with SASL-based authentication. SASL_SSL Listener with TLS-based encryption and SASL-based authentication. Given the following listeners configuration: the listener.security.protocol.map might look like this: This would configure the listener INT1 to use unencrypted connections with SASL authentication, the listener INT2 to use encrypted connections with SASL authentication and the REPLICATION interface to use TLS encryption (possibly with TLS client authentication). The same security protocol can be used multiple times. The following example is also a valid configuration: Such a configuration would use TLS encryption and TLS authentication (optional) for all interfaces. 6.2. TLS Encryption Kafka supports TLS for encrypting communication with Kafka clients. In order to use TLS encryption and server authentication, a keystore containing private and public keys has to be provided. This is usually done using a file in the Java Keystore (JKS) format. A path to this file is set in the ssl.keystore.location property. The ssl.keystore.password property should be used to set the password protecting the keystore. For example: In some cases, an additional password is used to protect the private key. Any such password can be set using the ssl.key.password property. Kafka is able to use keys signed by certification authorities as well as self-signed keys. Using keys signed by certification authorities should always be the preferred method. In order to allow clients to verify the identity of the Kafka broker they are connecting to, the certificate should always contain the advertised hostname(s) as its Common Name (CN) or in the Subject Alternative Names (SAN). It is possible to use different SSL configurations for different listeners. All options starting with ssl. can be prefixed with listener.name.<NameOfTheListener>. , where the name of the listener has to be always in lowercase. This will override the default SSL configuration for that specific listener. The following example shows how to use different SSL configurations for different listeners: Additional TLS configuration options In addition to the main TLS configuration options described above, Kafka supports many options for fine-tuning the TLS configuration. 
For example, to enable or disable TLS / SSL protocols or cipher suites: ssl.cipher.suites List of enabled cipher suites. Each cipher suite is a combination of authentication, encryption, MAC and key exchange algorithms used for the TLS connection. By default, all available cipher suites are enabled. ssl.enabled.protocols List of enabled TLS / SSL protocols. Defaults to TLSv1.2,TLSv1.1,TLSv1 . 6.2.1. Enabling TLS encryption This procedure describes how to enable encryption in Kafka brokers. Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. Procedure Generate TLS certificates for all Kafka brokers in your cluster. The certificates should have their advertised and bootstrap addresses in their Common Name or Subject Alternative Name. Edit the Kafka configuration properties file on all cluster nodes for the following: Change the listener.security.protocol.map field to specify the SSL protocol for the listener where you want to use TLS encryption. Set the ssl.keystore.location option to the path to the JKS keystore with the broker certificate. Set the ssl.keystore.password option to the password you used to protect the keystore. For example: (Re)start the Kafka brokers 6.3. Authentication To authenticate client connections to your Kafka cluster, the following options are available: TLS client authentication TLS (Transport Layer Security) using X.509 certificates on encrypted connections Kafka SASL Kafka SASL (Simple Authentication and Security Layer) using supported authentication mechanisms OAuth 2.0 OAuth 2.0 token-based authentication SASL authentication supports various mechanisms for both plain unencrypted connections and TLS connections: PLAIN ― Authentication based on usernames and passwords. SCRAM-SHA-256 and SCRAM-SHA-512 ― Authentication using Salted Challenge Response Authentication Mechanism (SCRAM). GSSAPI ― Authentication against a Kerberos server. Warning The PLAIN mechanism sends usernames and passwords over the network in an unencrypted format. It should only be used in combination with TLS encryption. 6.3.1. Enabling TLS client authentication Enable TLS client authentication in Kafka brokers to enhance security for connections to Kafka nodes already using TLS encryption. Use the ssl.client.auth property to set TLS authentication with one of these values: none ― TLS client authentication is off (default) requested ― Optional TLS client authentication required ― Clients must authenticate using a TLS client certificate When a client authenticates using TLS client authentication, the authenticated principal name is derived from the distinguished name in the client certificate. For instance, a user with a certificate having a distinguished name CN=someuser will be authenticated with the principal CN=someuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown . This principal name provides a unique identifier for the authenticated user or entity. When TLS client authentication is not used, and SASL is disabled, the principal name defaults to ANONYMOUS . Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. TLS encryption is enabled . Procedure Prepare a JKS (Java Keystore ) truststore containing the public key of the CA (Certification Authority) used to sign the user certificates. Edit the Kafka configuration properties file on all cluster nodes as follows: Specify the path to the JKS truststore using the ssl.truststore.location property. 
If the truststore is password-protected, set the password using ssl.truststore.password property. Set the ssl.client.auth property to required . TLS client authentication configuration (Re)start the Kafka brokers. 6.3.2. Enabling SASL PLAIN client authentication Enable SASL PLAIN authentication in Kafka to enhance security for connections to Kafka nodes. SASL authentication is enabled through the Java Authentication and Authorization Service (JAAS) using the KafkaServer JAAS context. You can define the JAAS configuration in a dedicated file or directly in the Kafka configuration. The recommended location for the dedicated file is ./config/jaas.conf . Ensure that the file is readable by the Kafka user. Keep the JAAS configuration file in sync on all Kafka nodes. Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. Procedure Edit or create the ./config/jaas.conf JAAS configuration file to enable the PlainLoginModule and specify the allowed usernames and passwords. Make sure this file is the same on all Kafka brokers. JAAS configuration Edit the Kafka configuration properties file on all cluster nodes as follows: Enable SASL PLAIN authentication on specific listeners using the listener.security.protocol.map property. Specify SASL_PLAINTEXT or SASL_SSL . Set the sasl.enabled.mechanisms property to PLAIN . SASL plain configuration (Re)start the Kafka brokers using the KAFKA_OPTS environment variable to pass the JAAS configuration to Kafka brokers: 6.3.3. Enabling SASL SCRAM client authentication Enable SASL SCRAM authentication in Kafka to enhance security for connections to Kafka nodes. SASL authentication is enabled through the Java Authentication and Authorization Service (JAAS) using the KafkaServer JAAS context. You can define the JAAS configuration in a dedicated file or directly in the Kafka configuration. The recommended location for the dedicated file is ./config/jaas.conf . Ensure that the file is readable by the Kafka user. Keep the JAAS configuration file in sync on all Kafka nodes. Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. Procedure Edit or create the ./config/jaas.conf JAAS configuration file to enable the ScramLoginModule . Make sure this file is the same on all Kafka brokers. JAAS configuration Edit the Kafka configuration properties file on all cluster nodes as follows: Enable SASL SCRAM authentication on specific listeners using the listener.security.protocol.map property. Specify SASL_PLAINTEXT or SASL_SSL . Set the sasl.enabled.mechanisms option to SCRAM-SHA-256 or SCRAM-SHA-512 . For example: (Re)start the Kafka brokers using the KAFKA_OPTS environment variable to pass the JAAS configuration to Kafka brokers. 6.3.4. Enabling multiple SASL mechanisms When using SASL authentication, you can enable more than one mechanism. Kafka can use more than one SASL mechanism simultaneously. When multiple mechanisms are enabled, you can choose the mechanism specific clients use. To use more than one mechanism, you set up the configuration required for each mechanism. You can add different KafkaServer JAAS configurations to the same context and enable more than one mechanism in the Kafka configuration as a comma-separated list using the sasl.mechanism.inter.broker.protocol property. JAAS configuration for more than one SASL mechanism SASL mechanisms enabled 6.3.5. 
Enabling SASL for inter-broker authentication Enable SASL SCRAM authentication between Kafka nodes to enhance security for inter-broker connections. As well as using SASL authentication for client connections to a Kafka cluster, you can also use SASL for inter-broker authentication. Unlike SASL for client connections, you can only choose one mechanism for inter-broker communication. Prerequisites ZooKeeper is installed on each host , and the configuration files are available. If you are using a SCRAM mechanism, register SCRAM credentials on the Kafka cluster. For all nodes in the Kafka cluster, add the inter-broker SASL SCRAM user to ZooKeeper. This ensures that the credentials for authentication are updated for bootstrapping before the Kafka cluster is running. Registering an inter-broker SASL SCRAM user bin/kafka-configs.sh \ --zookeeper localhost:2181 \ --alter \ --add-config 'SCRAM-SHA-512=[password=changeit]' \ --entity-type users \ --entity-name kafka Procedure Specify an inter-broker SASL mechanism in the Kafka configuration using the sasl.mechanism.inter.broker.protocol property. Inter-broker SASL mechanism (Optional) If you are using a SCRAM mechanism, register SCRAM credentials on the Kafka cluster by adding SCRAM users . This ensures that the credentials for authentication are updated for bootstrapping before the Kafka cluster is running. Specify the username and password for inter-broker communication in the KafkaServer JAAS context using the username and password fields. Inter-broker JAAS context 6.3.6. Adding SASL SCRAM users This procedure outlines the steps to register new users for authentication using SASL SCRAM in Kafka. SASL SCRAM authentication enhances the security of client connections. Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. SASL SCRAM authentication is enabled . Procedure Use the kafka-configs.sh tool to add new SASL SCRAM users. ./bin/kafka-configs.sh \ --bootstrap-server <broker_host>:<port> \ --alter \ --add-config 'SCRAM-SHA-512=[password=<password>]' \ --entity-type users --entity-name <username> For example: ./bin/kafka-configs.sh \ --bootstrap-server localhost:9092 \ --alter \ --add-config 'SCRAM-SHA-512=[password=123456]' \ --entity-type users \ --entity-name user1 6.3.7. Deleting SASL SCRAM users This procedure outlines the steps to remove users registered for authentication using SASL SCRAM in Kafka. Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. SASL SCRAM authentication is enabled . Procedure Use the kafka-configs.sh tool to delete SASL SCRAM users. /bin/kafka-configs.sh \ --bootstrap-server <broker_host>:<port> \ --alter \ --delete-config 'SCRAM-SHA-512' \ --entity-type users \ --entity-name <username> For example: /bin/kafka-configs.sh \ --bootstrap-server localhost:9092 \ --alter \ --delete-config 'SCRAM-SHA-512' \ --entity-type users \ --entity-name user1 6.3.8. Enabling Kerberos (GSSAPI) authentication Streams for Apache Kafka supports the use of the Kerberos (GSSAPI) authentication protocol for secure single sign-on access to your Kafka cluster. GSSAPI is an API wrapper for Kerberos functionality, insulating applications from underlying implementation changes. Kerberos is a network authentication system that allows clients and servers to authenticate to each other by using symmetric encryption and a trusted third party, the Kerberos Key Distribution Centre (KDC). 
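Before moving on to Kerberos, a quick way to confirm the SCRAM credentials added or removed in the previous procedures is to describe the user configuration. This is an illustrative check rather than part of the documented procedures, reusing the example broker address and username from above:

./bin/kafka-configs.sh \
  --bootstrap-server localhost:9092 \
  --describe \
  --entity-type users \
  --entity-name user1
# The output lists the SCRAM mechanisms (and iteration counts) registered for user1;
# an empty result means the --alter --add-config step did not take effect.

The following procedure configures Kerberos (GSSAPI) authentication for Kafka and ZooKeeper.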
This procedure shows how to configure Streams for Apache Kafka so that Kafka clients can access Kafka and ZooKeeper using Kerberos (GSSAPI) authentication. For this setup, Kafka is installed in the /opt/kafka/ directory. The procedure assumes that a Kerberos krb5 resource server has been set up on a Red Hat Enterprise Linux host. The procedure shows, with examples, how to configure: Service principals Kafka brokers to use the Kerberos login ZooKeeper to use Kerberos login Producer and consumer clients to access Kafka using Kerberos authentication The instructions describe Kerberos set up for a single ZooKeeper and Kafka installation on a single host, with additional configuration for a producer and consumer client. Prerequisites You are logged in to Red Hat Enterprise Linux as the Kafka user. To be able to configure Kafka and ZooKeeper to authenticate and authorize Kerberos credentials, you will need: Access to a Kerberos server A Kerberos client on each Kafka broker host For more information on the steps to set up a Kerberos server, and clients on broker hosts, see the example Kerberos on RHEL set up configuration . Add service principals for authentication From your Kerberos server, create service principals (users) for ZooKeeper, Kafka brokers, and Kafka producer and consumer clients. Service principals must take the form SERVICE-NAME/FULLY-QUALIFIED-HOST-NAME@DOMAIN-REALM . Create the service principals, and keytabs that store the principal keys, through the Kerberos KDC. Make sure the domain name in the Kerberos principal is in uppercase. For example: zookeeper/[email protected] kafka/[email protected] producer1/[email protected] consumer1/[email protected] The ZooKeeper service principal must have the same hostname as the zookeeper.connect configuration in the Kafka config/server.properties file: zookeeper.connect= node1.example.redhat.com :2181 If the hostname is not the same, localhost is used and authentication will fail. Create a directory on the host and add the keytab files: For example: /opt/kafka/krb5/zookeeper-node1.keytab /opt/kafka/krb5/kafka-node1.keytab /opt/kafka/krb5/kafka-producer1.keytab /opt/kafka/krb5/kafka-consumer1.keytab Configure ZooKeeper to use a Kerberos Login Configure ZooKeeper to use the Kerberos Key Distribution Center (KDC) for authentication using the user principals and keytabs previously created for zookeeper . Create or modify the opt/kafka/config/jaas.conf file to support ZooKeeper client and server operations: Client { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true 1 storeKey=true 2 useTicketCache=false 3 keyTab="/opt/kafka/krb5/zookeeper-node1.keytab" 4 principal="zookeeper/[email protected]"; 5 }; Server { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true useTicketCache=false keyTab="/opt/kafka/krb5/zookeeper-node1.keytab" principal="zookeeper/[email protected]"; }; QuorumServer { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true keyTab="/opt/kafka/krb5/zookeeper-node1.keytab" principal="zookeeper/[email protected]"; }; QuorumLearner { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true keyTab="/opt/kafka/krb5/zookeeper-node1.keytab" principal="zookeeper/[email protected]"; }; 1 Set to true to get the principal key from the keytab. 2 Set to true to store the principal key. 3 Set to true to obtain the Ticket Granting Ticket (TGT) from the ticket cache. 
4 The keyTab property points to the location of the keytab file copied from the Kerberos KDC. The location and file must be readable by the Kafka user. 5 The principal property is configured to match the fully-qualified principal name created on the KDC host, which follows the format SERVICE-NAME/FULLY-QUALIFIED-HOST-NAME@DOMAIN-NAME . Edit opt/kafka/config/zookeeper.properties to use the updated JAAS configuration: # ... requireClientAuthScheme=sasl jaasLoginRenew=3600000 1 kerberos.removeHostFromPrincipal=false 2 kerberos.removeRealmFromPrincipal=false 3 quorum.auth.enableSasl=true 4 quorum.auth.learnerRequireSasl=true 5 quorum.auth.serverRequireSasl=true quorum.auth.learner.loginContext=QuorumLearner 6 quorum.auth.server.loginContext=QuorumServer quorum.auth.kerberos.servicePrincipal=zookeeper/_HOST 7 quorum.cnxn.threads.size=20 1 Controls the frequency for login renewal in milliseconds, which can be adjusted to suit ticket renewal intervals. Default is one hour. 2 Dictates whether the hostname is used as part of the login principal name. If using a single keytab for all nodes in the cluster, this is set to true . However, it is recommended to generate a separate keytab and fully-qualified principal for each broker host for troubleshooting. 3 Controls whether the realm name is stripped from the principal name for Kerberos negotiations. It is recommended that this setting is set as false . 4 Enables SASL authentication mechanisms for the ZooKeeper server and client. 5 The RequireSasl properties controls whether SASL authentication is required for quorum events, such as master elections. 6 The loginContext properties identify the name of the login context in the JAAS configuration used for authentication configuration of the specified component. The loginContext names correspond to the names of the relevant sections in the opt/kafka/config/jaas.conf file. 7 Controls the naming convention to be used to form the principal name used for identification. The placeholder _HOST is automatically resolved to the hostnames defined by the server.1 properties at runtime. Start ZooKeeper with JVM parameters to specify the Kerberos login configuration: export EXTRA_ARGS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/kafka/config/jaas.conf"; /opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties If you are not using the default service name ( zookeeper ), add the name using the -Dzookeeper.sasl.client.username= NAME parameter. Note If you are using the /etc/krb5.conf location, you do not need to specify -Djava.security.krb5.conf=/etc/krb5.conf when starting ZooKeeper, Kafka, or the Kafka producer and consumer. Configure the Kafka broker server to use a Kerberos login Configure Kafka to use the Kerberos Key Distribution Center (KDC) for authentication using the user principals and keytabs previously created for kafka . Modify the opt/kafka/config/jaas.conf file with the following elements: KafkaServer { com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab="/opt/kafka/krb5/kafka-node1.keytab" principal="kafka/[email protected]"; }; KafkaClient { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true useTicketCache=false keyTab="/opt/kafka/krb5/kafka-node1.keytab" principal="kafka/[email protected]"; }; Configure each broker in the Kafka cluster by modifying the listener configuration in the config/server.properties file so the listeners use the SASL/GSSAPI login. 
Add the SASL protocol to the map of security protocols for the listener, and remove any unwanted protocols. For example: # ... broker.id=0 # ... listeners=SECURE://:9092,REPLICATION://:9094 1 inter.broker.listener.name=REPLICATION # ... listener.security.protocol.map=SECURE:SASL_PLAINTEXT,REPLICATION:SASL_PLAINTEXT 2 # .. sasl.enabled.mechanisms=GSSAPI 3 sasl.mechanism.inter.broker.protocol=GSSAPI 4 sasl.kerberos.service.name=kafka 5 ... 1 Two listeners are configured: a secure listener for general-purpose communications with clients (supporting TLS for communications), and a replication listener for inter-broker communications. 2 For TLS-enabled listeners, the protocol name is SASL_SSL. For non-TLS-enabled listeners, the protocol name is SASL_PLAINTEXT. If SSL is not required, you can remove the ssl.* properties. 3 SASL mechanism for Kerberos authentication is GSSAPI . 4 Kerberos authentication for inter-broker communication. 5 The name of the service used for authentication requests is specified to distinguish it from other services that may also be using the same Kerberos configuration. Start the Kafka broker with JVM parameters to specify the Kerberos login configuration: export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/kafka/config/jaas.conf"; /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties If the broker and ZooKeeper cluster were previously configured and working with a non-Kerberos-based authentication system, it is possible to start the ZooKeeper and broker cluster and check for configuration errors in the logs. After starting the broker and ZooKeeper instances, the cluster is now configured for Kerberos authentication. Configure Kafka producer and consumer clients to use Kerberos authentication Configure Kafka producer and consumer clients to use the Kerberos Key Distribution Center (KDC) for authentication using the user principals and keytabs previously created for producer1 and consumer1 . Add the Kerberos configuration to the producer or consumer configuration file. For example: Configuration in producer.properties # ... sasl.mechanism=GSSAPI 1 security.protocol=SASL_PLAINTEXT 2 sasl.kerberos.service.name=kafka 3 sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \ 4 useKeyTab=true \ useTicketCache=false \ storeKey=true \ keyTab="/opt/kafka/krb5/producer1.keytab" \ principal="producer1/[email protected]"; # ... 1 Configuration for Kerberos (GSSAPI) authentication. 2 The SASL_PLAINTEXT security protocol is used, which is SASL authentication over an unencrypted connection; use SASL_SSL for TLS connections. 3 The service principal (user) for Kafka that was configured in the Kerberos KDC. 4 Configuration for the JAAS using the same properties defined in jaas.conf . Configuration in consumer.properties # ... sasl.mechanism=GSSAPI security.protocol=SASL_PLAINTEXT sasl.kerberos.service.name=kafka sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \ useKeyTab=true \ useTicketCache=false \ storeKey=true \ keyTab="/opt/kafka/krb5/consumer1.keytab" \ principal="consumer1/[email protected]"; # ... Run the clients to verify that you can send and receive messages from the Kafka brokers.
Producer client: export KAFKA_HEAP_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Dsun.security.krb5.debug=true"; /opt/kafka/bin/kafka-console-producer.sh --producer.config /opt/kafka/config/producer.properties --topic topic1 --bootstrap-server node1.example.redhat.com:9094 Consumer client: export KAFKA_HEAP_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Dsun.security.krb5.debug=true"; /opt/kafka/bin/kafka-console-consumer.sh --consumer.config /opt/kafka/config/consumer.properties --topic topic1 --bootstrap-server node1.example.redhat.com:9094 Additional resources Kerberos man pages: krb5.conf(5), kinit(1), klist(1), and kdestroy(1) Example Kerberos server on RHEL set up configuration Example client application to authenticate with a Kafka cluster using Kerberos tickets 6.4. Authorization Authorization in Kafka brokers is implemented using authorizer plugins. In this section we describe how to use the AclAuthorizer plugin provided with Kafka. Alternatively, you can use your own authorization plugins. For example, if you are using OAuth 2.0 token-based authentication , you can use OAuth 2.0 authorization . 6.4.1. Enabling an ACL authorizer Edit the ./config/server.properties file to add an ACL authorizer. Enable the authorizer by specifying its fully-qualified name in the authorizer.class.name property: Enabling the authorizer authorizer.class.name=kafka.security.authorizer.AclAuthorizer For AclAuthorizer , the fully-qualified name is kafka.security.authorizer.AclAuthorizer . 6.4.1.1. ACL rules An ACL authorizer uses ACL rules to manage access to Kafka brokers. ACL rules are defined in the following format: Principal P is allowed / denied <operation> O on <kafka_resource> R from host H For example, a rule might be set so that user John can view the topic comments from host 127.0.0.1 . Host is the IP address of the machine that John is connecting from. In most cases, the user is a producer or consumer application: Consumer01 can write to the consumer group accounts from host 127.0.0.1 If ACL rules are not present for a given resource, all actions are denied. This behavior can be changed by setting the property allow.everyone.if.no.acl.found to true in the Kafka configuration file ./config/server.properties . 6.4.1.2. Principals A principal represents the identity of a user. The format of the ID depends on the authentication mechanism used by clients to connect to Kafka: User:ANONYMOUS when connected without authentication. User:<username> when connected using simple authentication mechanisms, such as PLAIN or SCRAM. For example User:admin or User:user1 . User:<DistinguishedName> when connected using TLS client authentication. For example User:CN=user1,O=MyCompany,L=Prague,C=CZ . User:<Kerberos username> when connected using Kerberos. The DistinguishedName is the distinguished name from the client certificate. The Kerberos username is the primary part of the Kerberos principal, which is used by default when connecting using Kerberos. You can use the sasl.kerberos.principal.to.local.rules property to configure how the Kafka principal is built from the Kerberos principal. 6.4.1.3. Authentication of users To use authorization, you need to have authentication enabled and used by your clients. Otherwise, all connections will have the principal User:ANONYMOUS . For more information on methods of authentication, see Section 6.3, "Authentication" . 6.4.1.4. Super users Super users are allowed to take all actions regardless of the ACL rules. 
Super users are defined in the Kafka configuration file using the property super.users . For example: 6.4.1.5. Replica broker authentication When authorization is enabled, it is applied to all listeners and all connections. This includes the inter-broker connections used for replication of data between brokers. If enabling authorization, therefore, ensure that you use authentication for inter-broker connections and give the users used by the brokers sufficient rights. For example, if authentication between brokers uses the kafka-broker user, then super user configuration must include the username super.users=User:kafka-broker . Note For more information on the operations on Kafka resources you can control with ACLs, see the Apache Kafka documentation . 6.4.2. Adding ACL rules When using an ACL authorizer to control access to Kafka based on Access Control Lists (ACLs), you can add new ACL rules using the kafka-acls.sh utility. Use kafka-acls.sh parameter options to add, list and remove ACL rules, and perform other functions. The parameters require a double-hyphen convention, such as --add . Prerequisites Users have been created and granted appropriate permissions to access Kafka resources. Streams for Apache Kafka is installed on each host , and the configuration files are available. Authorization is enabled in Kafka brokers. Procedure Run kafka-acls.sh with the --add option. Examples: Allow user1 and user2 access to read from myTopic using the MyConsumerGroup consumer group. opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --operation Read --topic myTopic --allow-principal User:user1 --allow-principal User:user2 opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --operation Describe --topic myTopic --allow-principal User:user1 --allow-principal User:user2 opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --operation Read --operation Describe --group MyConsumerGroup --allow-principal User:user1 --allow-principal User:user2 Deny user1 access to read myTopic from IP address host 127.0.0.1 . opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --operation Describe --operation Read --topic myTopic --group MyConsumerGroup --deny-principal User:user1 --deny-host 127.0.0.1 Add user1 as the consumer of myTopic with MyConsumerGroup . opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --consumer --topic myTopic --group MyConsumerGroup --allow-principal User:user1 6.4.3. Listing ACL rules When using an ACL authorizer to control access to Kafka based on Access Control Lists (ACLs), you can list existing ACL rules using the kafka-acls.sh utility. Prerequisites ACLs have been added . Procedure Run kafka-acls.sh with the --list option. For example: opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --list --topic myTopic Current ACLs for resource `Topic:myTopic`: User:user1 has Allow permission for operations: Read from hosts: * User:user2 has Allow permission for operations: Read from hosts: * User:user2 has Deny permission for operations: Read from hosts: 127.0.0.1 User:user1 has Allow permission for operations: Describe from hosts: * User:user2 has Allow permission for operations: Describe from hosts: * User:user2 has Deny permission for operations: Describe from hosts: 127.0.0.1 6.4.4. Removing ACL rules When using an ACL authorizer to control access to Kafka based on Access Control Lists (ACLs), you can remove existing ACL rules using the kafka-acls.sh utility. Prerequisites ACLs have been added . 
Procedure Run kafka-acls.sh with the --remove option. Examples: Remove the ACL allowing user1 and user2 access to read from myTopic using the MyConsumerGroup consumer group. opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --remove --operation Read --topic myTopic --allow-principal User:user1 --allow-principal User:user2 opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --remove --operation Describe --topic myTopic --allow-principal User:user1 --allow-principal User:user2 opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --remove --operation Read --operation Describe --group MyConsumerGroup --allow-principal User:user1 --allow-principal User:user2 Remove the ACL adding user1 as the consumer of myTopic with MyConsumerGroup . opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --remove --consumer --topic myTopic --group MyConsumerGroup --allow-principal User:user1 Remove the ACL denying user1 access to read myTopic from IP address host 127.0.0.1 . opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --remove --operation Describe --operation Read --topic myTopic --group MyConsumerGroup --deny-principal User:user1 --deny-host 127.0.0.1 6.5. ZooKeeper authentication By default, connections between ZooKeeper and Kafka are not authenticated. However, Kafka and ZooKeeper support Java Authentication and Authorization Service (JAAS) which can be used to set up authentication using Simple Authentication and Security Layer (SASL). ZooKeeper supports authentication using the DIGEST-MD5 SASL mechanism with locally stored credentials. 6.5.1. JAAS Configuration SASL authentication for ZooKeeper connections has to be configured in the JAAS configuration file. By default, Kafka will use the JAAS context named Client for connecting to ZooKeeper. The Client context should be configured in the ./config/jaas.conf file. The context has to enable the PLAIN SASL authentication, as in the following example: 6.5.2. Enabling ZooKeeper authentication This procedure describes how to enable authentication using the SASL DIGEST-MD5 mechanism when connecting to ZooKeeper. Prerequisites Client-to-server authentication is enabled in ZooKeeper. Enabling SASL DIGEST-MD5 authentication On all Kafka broker nodes, create or edit the ./config/jaas.conf JAAS configuration file and add the following context: The username and password should be the same as configured in ZooKeeper. The following example shows the Client context: Restart all Kafka broker nodes one by one. To pass the JAAS configuration to Kafka brokers, use the KAFKA_OPTS environment variable. For information on restarting brokers in a multi-node cluster, see Section 4.3, "Performing a graceful rolling restart of Kafka brokers" . Additional resources Authentication 6.6. ZooKeeper authorization When authentication is enabled between Kafka and ZooKeeper, you can use ZooKeeper Access Control List (ACL) rules to automatically control access to Kafka's metadata stored in ZooKeeper. 6.6.1. ACL Configuration Enforcement of ZooKeeper ACL rules is controlled by the zookeeper.set.acl property in the config/server.properties Kafka configuration file. The property is disabled by default and enabled by setting it to true : If ACL rules are enabled, when a znode is created in ZooKeeper only the Kafka user who created it can modify or delete it. All other users have read-only access. Kafka sets ACL rules only for newly created ZooKeeper znodes .
If the ACLs are only enabled after the first start of the cluster, the zookeeper-security-migration.sh tool can set ACLs on all existing znodes . Confidentiality of data in ZooKeeper Data stored in ZooKeeper includes: Topic names and their configuration Salted and hashed user credentials when SASL SCRAM authentication is used. But ZooKeeper does not store any records sent and received using Kafka. The data stored in ZooKeeper is assumed to be non-confidential. If the data is to be regarded as confidential (for example because topic names contain customer IDs), the only option available for protection is isolating ZooKeeper on the network level and allowing access only to Kafka brokers. 6.6.2. Enabling ZooKeeper ACLs for a new Kafka cluster This procedure describes how to enable ZooKeeper ACLs in Kafka configuration for a new Kafka cluster. Use this procedure only before the first start of the Kafka cluster. For enabling ZooKeeper ACLs in a cluster that is already running, see Section 6.6.3, "Enabling ZooKeeper ACLs in an existing Kafka cluster" . Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. A ZooKeeper cluster is configured and running . Client-to-server authentication is enabled in ZooKeeper. ZooKeeper authentication is enabled in the Kafka brokers. Kafka brokers have not yet been started. Procedure Edit the Kafka configuration properties file to set the zookeeper.set.acl field to true on all cluster nodes. Start the Kafka brokers. 6.6.3. Enabling ZooKeeper ACLs in an existing Kafka cluster This procedure describes how to enable ZooKeeper ACLs in Kafka configuration for a Kafka cluster that is running. Use the zookeeper-security-migration.sh tool to set ZooKeeper ACLs on all existing znodes . The zookeeper-security-migration.sh is available as part of Streams for Apache Kafka, and can be found in the bin directory. Prerequisites Kafka cluster is configured and running . Enabling the ZooKeeper ACLs Edit the Kafka configuration properties file to set the zookeeper.set.acl field to true on all cluster nodes. Restart all Kafka brokers one by one. For information on restarting brokers in a multi-node cluster, see Section 4.3, "Performing a graceful rolling restart of Kafka brokers" . Set the ACLs on all existing ZooKeeper znodes using the zookeeper-security-migration.sh tool. Replace <zookeeper_url> with the connection string for your ZooKeeper cluster, such as localhost:2181 . | [
"listeners=INT1://:9092,INT2://:9093,REPLICATION://:9094",
"listener.security.protocol.map=INT1:SASL_PLAINTEXT,INT2:SASL_SSL,REPLICATION:SSL",
"listener.security.protocol.map=INT1:SSL,INT2:SSL,REPLICATION:SSL",
"ssl.keystore.location=/path/to/keystore/server-1.jks ssl.keystore.password=123456",
"listeners=INT1://:9092,INT2://:9093,REPLICATION://:9094 listener.security.protocol.map=INT1:SSL,INT2:SSL,REPLICATION:SSL Default configuration - will be used for listeners INT1 and INT2 ssl.keystore.location=/path/to/keystore/server-1.jks ssl.keystore.password=123456 Different configuration for listener REPLICATION listener.name.replication.ssl.keystore.location=/path/to/keystore/replication.jks listener.name.replication.ssl.keystore.password=123456",
"listeners=UNENCRYPTED://:9092,ENCRYPTED://:9093,REPLICATION://:9094 listener.security.protocol.map=UNENCRYPTED:PLAINTEXT,ENCRYPTED:SSL,REPLICATION:PLAINTEXT ssl.keystore.location=/path/to/keystore/server-1.jks ssl.keystore.password=123456",
"ssl.truststore.location=/path/to/truststore.jks ssl.truststore.password=123456 ssl.client.auth=required",
"KafkaServer { org.apache.kafka.common.security.plain.PlainLoginModule required user_admin=\"123456\" user_user1=\"123456\" user_user2=\"123456\"; };",
"listeners=INSECURE://:9092,AUTHENTICATED://:9093,REPLICATION://:9094 listener.security.protocol.map=INSECURE:PLAINTEXT,AUTHENTICATED:SASL_PLAINTEXT,REPLICATION:PLAINTEXT sasl.enabled.mechanisms=PLAIN",
"export KAFKA_OPTS=\"-Djava.security.auth.login.config=./config/jaas.conf\"; ./bin/kafka-server-start.sh -daemon ./config/server.properties",
"KafkaServer { org.apache.kafka.common.security.scram.ScramLoginModule required; };",
"listeners=INSECURE://:9092,AUTHENTICATED://:9093,REPLICATION://:9094 listener.security.protocol.map=INSECURE:PLAINTEXT,AUTHENTICATED:SASL_PLAINTEXT,REPLICATION:PLAINTEXT sasl.enabled.mechanisms=SCRAM-SHA-512",
"export KAFKA_OPTS=\"-Djava.security.auth.login.config=./config/jaas.conf\"; ./bin/kafka-server-start.sh -daemon ./config/server.properties",
"KafkaServer { org.apache.kafka.common.security.plain.PlainLoginModule required user_admin=\"123456\" user_user1=\"123456\" user_user2=\"123456\"; com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab=\"/etc/security/keytabs/kafka_server.keytab\" principal=\"kafka/[email protected]\"; org.apache.kafka.common.security.scram.ScramLoginModule required; };",
"sasl.enabled.mechanisms=PLAIN,SCRAM-SHA-256,SCRAM-SHA-512",
"bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-512=[password=changeit]' --entity-type users --entity-name kafka",
"sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512",
"KafkaServer { org.apache.kafka.common.security.plain.ScramLoginModule required username=\"admin\" password=\"123456\" # };",
"./bin/kafka-configs.sh --bootstrap-server <broker_host>:<port> --alter --add-config 'SCRAM-SHA-512=[password=<password>]' --entity-type users --entity-name <username>",
"./bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config 'SCRAM-SHA-512=[password=123456]' --entity-type users --entity-name user1",
"--bootstrap-server <broker_host>:<port> --alter --delete-config 'SCRAM-SHA-512' --entity-type users --entity-name <username>",
"--bootstrap-server localhost:9092 --alter --delete-config 'SCRAM-SHA-512' --entity-type users --entity-name user1",
"zookeeper.connect= node1.example.redhat.com :2181",
"/opt/kafka/krb5/zookeeper-node1.keytab /opt/kafka/krb5/kafka-node1.keytab /opt/kafka/krb5/kafka-producer1.keytab /opt/kafka/krb5/kafka-consumer1.keytab",
"Client { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true 1 storeKey=true 2 useTicketCache=false 3 keyTab=\"/opt/kafka/krb5/zookeeper-node1.keytab\" 4 principal=\"zookeeper/[email protected]\"; 5 }; Server { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true useTicketCache=false keyTab=\"/opt/kafka/krb5/zookeeper-node1.keytab\" principal=\"zookeeper/[email protected]\"; }; QuorumServer { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true keyTab=\"/opt/kafka/krb5/zookeeper-node1.keytab\" principal=\"zookeeper/[email protected]\"; }; QuorumLearner { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true keyTab=\"/opt/kafka/krb5/zookeeper-node1.keytab\" principal=\"zookeeper/[email protected]\"; };",
"requireClientAuthScheme=sasl jaasLoginRenew=3600000 1 kerberos.removeHostFromPrincipal=false 2 kerberos.removeRealmFromPrincipal=false 3 quorum.auth.enableSasl=true 4 quorum.auth.learnerRequireSasl=true 5 quorum.auth.serverRequireSasl=true quorum.auth.learner.loginContext=QuorumLearner 6 quorum.auth.server.loginContext=QuorumServer quorum.auth.kerberos.servicePrincipal=zookeeper/_HOST 7 quorum.cnxn.threads.size=20",
"export EXTRA_ARGS=\"-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/kafka/config/jaas.conf\"; /opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties",
"KafkaServer { com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab=\"/opt/kafka/krb5/kafka-node1.keytab\" principal=\"kafka/[email protected]\"; }; KafkaClient { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true useTicketCache=false keyTab=\"/opt/kafka/krb5/kafka-node1.keytab\" principal=\"kafka/[email protected]\"; };",
"broker.id=0 listeners=SECURE://:9092,REPLICATION://:9094 1 inter.broker.listener.name=REPLICATION listener.security.protocol.map=SECURE:SASL_PLAINTEXT,REPLICATION:SASL_PLAINTEXT 2 .. sasl.enabled.mechanisms=GSSAPI 3 sasl.mechanism.inter.broker.protocol=GSSAPI 4 sasl.kerberos.service.name=kafka 5",
"export KAFKA_OPTS=\"-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/kafka/config/jaas.conf\"; /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties",
"sasl.mechanism=GSSAPI 1 security.protocol=SASL_PLAINTEXT 2 sasl.kerberos.service.name=kafka 3 sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \\ 4 useKeyTab=true useTicketCache=false storeKey=true keyTab=\"/opt/kafka/krb5/producer1.keytab\" principal=\"producer1/[email protected]\";",
"sasl.mechanism=GSSAPI security.protocol=SASL_PLAINTEXT sasl.kerberos.service.name=kafka sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true useTicketCache=false storeKey=true keyTab=\"/opt/kafka/krb5/consumer1.keytab\" principal=\"consumer1/[email protected]\";",
"export KAFKA_HEAP_OPTS=\"-Djava.security.krb5.conf=/etc/krb5.conf -Dsun.security.krb5.debug=true\"; /opt/kafka/bin/kafka-console-producer.sh --producer.config /opt/kafka/config/producer.properties --topic topic1 --bootstrap-server node1.example.redhat.com:9094",
"export KAFKA_HEAP_OPTS=\"-Djava.security.krb5.conf=/etc/krb5.conf -Dsun.security.krb5.debug=true\"; /opt/kafka/bin/kafka-console-consumer.sh --consumer.config /opt/kafka/config/consumer.properties --topic topic1 --bootstrap-server node1.example.redhat.com:9094",
"authorizer.class.name=kafka.security.authorizer.AclAuthorizer",
"super.users=User:admin,User:operator",
"opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --operation Read --topic myTopic --allow-principal User:user1 --allow-principal User:user2 opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --operation Describe --topic myTopic --allow-principal User:user1 --allow-principal User:user2 opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --operation Read --operation Describe --group MyConsumerGroup --allow-principal User:user1 --allow-principal User:user2",
"opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --operation Describe --operation Read --topic myTopic --group MyConsumerGroup --deny-principal User:user1 --deny-host 127.0.0.1",
"opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --consumer --topic myTopic --group MyConsumerGroup --allow-principal User:user1",
"opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --list --topic myTopic Current ACLs for resource `Topic:myTopic`: User:user1 has Allow permission for operations: Read from hosts: * User:user2 has Allow permission for operations: Read from hosts: * User:user2 has Deny permission for operations: Read from hosts: 127.0.0.1 User:user1 has Allow permission for operations: Describe from hosts: * User:user2 has Allow permission for operations: Describe from hosts: * User:user2 has Deny permission for operations: Describe from hosts: 127.0.0.1",
"opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --remove --operation Read --topic myTopic --allow-principal User:user1 --allow-principal User:user2 opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --remove --operation Describe --topic myTopic --allow-principal User:user1 --allow-principal User:user2 opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --remove --operation Read --operation Describe --group MyConsumerGroup --allow-principal User:user1 --allow-principal User:user2",
"opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --remove --consumer --topic myTopic --group MyConsumerGroup --allow-principal User:user1",
"opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --remove --operation Describe --operation Read --topic myTopic --group MyConsumerGroup --deny-principal User:user1 --deny-host 127.0.0.1",
"Client { org.apache.kafka.common.security.plain.PlainLoginModule required username=\"kafka\" password=\"123456\"; };",
"Client { org.apache.kafka.common.security.plain.PlainLoginModule required username=\" <Username> \" password=\" <Password> \"; };",
"Client { org.apache.kafka.common.security.plain.PlainLoginModule required username=\"kafka\" password=\"123456\"; };",
"export KAFKA_OPTS=\"-Djava.security.auth.login.config=./config/jaas.conf\"; ./bin/kafka-server-start.sh -daemon ./config/server.properties",
"zookeeper.set.acl=true",
"zookeeper.set.acl=true",
"zookeeper.set.acl=true",
"KAFKA_OPTS=\"-Djava.security.auth.login.config=./config/jaas.conf\"; ./bin/zookeeper-security-migration.sh --zookeeper.acl=secure --zookeeper.connect=<zookeeper_url>"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_streams_for_apache_kafka_on_rhel_with_zookeeper/assembly-securing-kafka-str |
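As a complement to the TLS and SASL listener configuration covered in the chapter above, the following sketch shows one way to check a secured listener from a client host. It is illustrative only: the host name kafka0.example.com, the topic name, and the truststore path are assumptions, and the credentials reuse the user1 example values from the SCRAM sections.

# Confirm that the TLS handshake on an encrypted listener succeeds and show the broker certificate subject
openssl s_client -connect kafka0.example.com:9093 -servername kafka0.example.com </dev/null 2>/dev/null | grep -E 'subject=|Verify return code'

# Build a client properties file for a SASL_SSL listener that uses SCRAM-SHA-512
cat > /tmp/client.properties <<'EOF'
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="user1" \
  password="123456";
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=123456
EOF

# Send a test message through the secured listener
./bin/kafka-console-producer.sh \
  --bootstrap-server kafka0.example.com:9093 \
  --topic test-topic \
  --producer.config /tmp/client.properties

If the handshake or authentication fails, the broker logs and the listener.security.protocol.map settings described above are the first places to check.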
Preface | Preface This guide describes the administration of automation controller through custom scripts, management jobs, and more. Written for DevOps engineers and administrators, the Configuring automation execution guide assumes a basic understanding of the systems requiring management with automation controllers easy-to-use graphical interface. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/configuring_automation_execution/pr01 |
Chapter 5. Investigating the pods of the RHOSO High Availability services | Chapter 5. Investigating the pods of the RHOSO High Availability services The operator of each Red Hat OpenStack Services on OpenShift (RHOSO) High Availability service monitors the status of the pods that they manage. If necessary, an operator will take an appropriate action with the aim of keeping at least one replica of the service with a status of Running . You can use the following command to check the status and availability of all the pods of the Galera, RabbitMQ, and memcached shared control plane services: You can use the following command to investigate a specific pod from this list, typically to determine why a pod cannot be started: Replace <pod-name> with the name of the pod from the list of pods that you want more information about. In the following example the rabbitmq-server-0 pod is being investigated: 5.1. Understanding the Taint/Toleration based pod eviction process Red Hat OpenShift Container Platform (RHOCP) implements a Taint/Toleration based pod eviction process, which determines how individual pods are evicted from worker nodes. Pods that are evicted are rescheduled on different nodes. RHOCP assigns taints to specific node conditions, such as not-ready and unreachable . When a node experiences one of these conditions, RHOCP automatically taints the node. After a node has become tainted, the pods must determine if they can tolerate this taint or not: Any pod that does not tolerate the taint is evicted immediately. Any pod that does tolerate the taint will never be evicted, unless the pod has a limited toleration to this taint: The tolerationSeconds parameter specifies the limited toleration of a pod, which is how long a pod can tolerate a specific taint (node condition) and remain connected to the node. If the node condition still exists after the specified tolerationSeconds period, the taint remains on the node and the pod is evicted. If the node condition clears before the specified tolerationSeconds period, then the pod is not evicted. RHOCP adds a default toleration for the node.kubernetes.io/not-ready and node.kubernetes.io/unreachable taints of five minutes, by setting tolerationSeconds=300 . Important RHOSO 18.0 operators do not modify the default tolerations for taints, therefore pods that run on a tainted worker node take more than five minutes to be rescheduled. For more information about RHOCP node remediation, fencing, and maintenance, see Workload Availability remediation, fencing, and maintenance . | [
"get pods |egrep -e \"galera|rabbit|memcache\" NAME READY STATUS RESTARTS AGE memcached-0 1/1 Running 0 3h11m memcached-1 1/1 Running 0 3h11m memcached-2 1/1 Running 0 3h11m openstack-cell1-galera-0 1/1 Running 0 3h11m openstack-cell1-galera-1 1/1 Running 0 3h11m openstack-cell1-galera-2 1/1 Running 0 3h11m openstack-galera-0 1/1 Running 0 3h11m openstack-galera-1 1/1 Running 0 3h11m openstack-galera-2 1/1 Running 0 3h11m rabbitmq-cell1-server-0 1/1 Running 0 3h11m rabbitmq-cell1-server-1 1/1 Running 0 3h11m rabbitmq-cell1-server-2 1/1 Running 0 3h11m rabbitmq-server-0 1/1 Running 0 3h11m rabbitmq-server-1 1/1 Running 0 3h11m rabbitmq-server-2 1/1 Running 0 3h11m",
"oc describe pod/<pod-name>",
"oc describe pod/rabbitmq-server-0 Name: rabbitmq-server-0 Namespace: openstack Priority: 0 Service Account: rabbitmq-server Node: master-2/192.168.111.22 Start Time: Thu, 21 Mar 2024 08:39:57 -0400 Labels: app.kubernetes.io/component=rabbitmq app.kubernetes.io/name=rabbitmq app.kubernetes.io/part-of=rabbitmq controller-revision-hash=rabbitmq-server-5c886b79b4 statefulset.kubernetes.io/pod-name=rabbitmq-server-0 Annotations: k8s.ovn.org/pod-networks: {\"default\":{\"ip_addresses\":[\"192.168.16.35/22\"],\"mac_address\":\"0a:58:c0:a8:10:23\",\"gateway_ips\":[\"192.168.16.1\"],\"routes\":[{\"dest\":\"192.16 k8s.v1.cni.cncf.io/network-status: [{ \"name\": \"ovn-kubernetes\", \"interface\": \"eth0\", \"ips\": [ \"192.168.16.35\" ], \"mac\": \"0a:58:c0:a8:10:23\", \"default\": true, \"dns\": {} }] openshift.io/scc: restricted-v2 seccomp.security.alpha.kubernetes.io/pod: runtime/default Status: Running"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/monitoring_high_availability_services/proc_investigating-the-pods-of-the-rhoso-ha-services_ha-monitoring |
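The taint and toleration behavior described in Chapter 5 can be inspected directly with the oc client. The commands below are an illustrative sketch; the node name is a placeholder, and the pod name and namespace reuse the rabbitmq-server-0 example from above.

# Show any taints currently applied to a node (for example after it becomes not-ready or unreachable)
oc describe node <node_name> | grep -A 3 'Taints:'

# Show the tolerations, including tolerationSeconds, carried by one of the service pods
oc get pod rabbitmq-server-0 -n openstack -o yaml | grep -A 8 'tolerations:'

A pod whose toleration for node.kubernetes.io/not-ready or node.kubernetes.io/unreachable carries tolerationSeconds: 300 is rescheduled roughly five minutes after the node is tainted, which matches the default behavior described above.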
Chapter 15. Uninstalling a cluster on GCP | Chapter 15. Uninstalling a cluster on GCP You can remove a cluster that you deployed to Google Cloud Platform (GCP). 15.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. For example, some Google Cloud resources require IAM permissions in shared VPC host projects, or there might be unused health checks that must be deleted . Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. 15.2. Deleting Google Cloud Platform resources with the Cloud Credential Operator utility After uninstalling an OpenShift Container Platform cluster that uses short-term credentials managed outside the cluster, you can use the CCO utility ( ccoctl ) to remove the Google Cloud Platform (GCP) resources that ccoctl created during installation. Prerequisites Extract and prepare the ccoctl binary. Uninstall an OpenShift Container Platform cluster on GCP that uses short-term credentials. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --to=<path_to_directory_for_credentials_requests> 2 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Delete the GCP resources that ccoctl created by running the following command: USD ccoctl gcp delete \ --name=<name> \ 1 --project=<gcp_project_id> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \ --force-delete-custom-roles 3 1 <name> matches the name that was originally used to create and tag the cloud resources. 2 <gcp_project_id> is the GCP project ID in which to delete cloud resources. 3 Optional: This parameter deletes the custom roles that the ccoctl utility creates during installation. GCP does not permanently delete custom roles immediately. 
For more information, see GCP documentation about deleting a custom role . Verification To verify that the resources are deleted, query GCP. For more information, refer to GCP documentation. | [
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --to=<path_to_directory_for_credentials_requests> 2",
"ccoctl gcp delete --name=<name> \\ 1 --project=<gcp_project_id> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> --force-delete-custom-roles 3"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_gcp/uninstalling-cluster-gcp |
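After running openshift-install destroy cluster and ccoctl gcp delete as described above, you can query GCP for anything that was left behind. This is an illustrative sketch, not part of the documented procedure: it assumes the gcloud CLI and jq are installed, that metadata.json is still present in the installation directory, and that the installer labelled resources with a kubernetes-io-cluster-<infra_id> label, which you should verify against your own cluster.

# Read the infrastructure ID that the installer recorded for the cluster
INFRA_ID=$(jq -r .infraID <installation_directory>/metadata.json)

# List compute instances and disks that still carry the cluster ownership label
gcloud compute instances list --filter="labels.kubernetes-io-cluster-${INFRA_ID}=owned"
gcloud compute disks list --filter="labels.kubernetes-io-cluster-${INFRA_ID}=owned"

Empty results suggest the compute resources were cleaned up; health checks, forwarding rules, and IAM bindings in shared VPC host projects may still need a separate check, as noted in the chapter above.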
5.3.9. Activating and Deactivating Volume Groups | 5.3.9. Activating and Deactivating Volume Groups When you create a volume group it is, by default, activated. This means that the logical volumes in that group are accessible and subject to change. There are various circumstances for which you need to make a volume group inactive and thus unknown to the kernel. To deactivate or activate a volume group, use the -a ( --available ) argument of the vgchange command. The following example deactivates the volume group my_volume_group . If clustered locking is enabled, add 'e' to activate or deactivate a volume group exclusively on one node or 'l' to activate or deactivate a volume group only on the local node. Logical volumes with single-host snapshots are always activated exclusively because they can only be used on one node at once. You can deactivate individual logical volumes with the lvchange command, as described in Section 5.4.10, "Changing the Parameters of a Logical Volume Group" . For information on activating logical volumes on individual nodes in a cluster, see Section 5.7, "Activating Logical Volumes on Individual Nodes in a Cluster" . | [
"vgchange -a n my_volume_group"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/VG_activate |
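The activation states described above can be checked from the command line. The following short sequence is illustrative, reusing the my_volume_group example name; run it against a test volume group first.

# Deactivate the volume group and confirm its logical volumes are reported as inactive
vgchange -a n my_volume_group
lvscan | grep my_volume_group

# Reactivate it (use -a ey or -a ly for exclusive or local activation when clustered locking is enabled)
vgchange -a y my_volume_group
lvs -o lv_name,lv_attr my_volume_group

In the lvs output, an active logical volume shows an 'a' in the fifth character of the attribute field.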
Chapter 13. Troubleshooting and maintaining the Load-balancing service | Chapter 13. Troubleshooting and maintaining the Load-balancing service Basic troubleshooting and maintenance for the Load-balancing service (octavia) starts with being familiar with the OpenStack client commands for showing status and migrating instances, and knowing how to access logs. If you need to troubleshoot more in depth, you can SSH into one or more Load-balancing service instances (amphorae). Section 13.1, "Verifying the load balancer" Section 13.2, "Load-balancing service instance administrative logs" Section 13.3, "Migrating a specific Load-balancing service instance" Section 13.4, "Using SSH to connect to load-balancing instances" Section 13.5, "Showing listener statistics" Section 13.6, "Interpreting listener request errors" 13.1. Verifying the load balancer You can troubleshoot the Load-balancing service (octavia) and its various components by viewing the output of the load balancer show and list commands. Procedure Source your credentials file. Example Verify the load balancer ( lb1 ) settings. Note Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site. Example Sample output Using the loadbalancer ID ( 265d0b71-c073-40f4-9718-8a182c6d53ca ) from the step, obtain the ID of the amphora associated with the load balancer ( lb1 ). Example Sample output Using the amphora ID ( 1afabefd-ba09-49e1-8c39-41770aa25070 ) from the step, view amphora information. Example Sample output View the listener ( listener1 ) details. Example Sample output View the pool ( pool1 ) and load-balancer members. Example Sample output Verify HTTPS traffic flows across a load balancer whose listener is configured for HTTPS or TERMINATED_HTTPS protocols by connecting to the VIP address ( 192.0.2.177 ) of the load balancer. Tip Obtain the load-balancer VIP address by using the command, openstack loadbalancer show <load_balancer_name> . Note Security groups implemented for the load balancer VIP only allow data traffic for the required protocols and ports. For this reason you cannot ping load balancer VIPs, because ICMP traffic is blocked. Example Sample output Additional resources loadbalancer in the Command Line Interface Reference 13.2. Load-balancing service instance administrative logs The administrative log offloading feature of the Load-balancing service instance (amphora) covers all of the system logging inside the amphora except for the tenant flow logs. You can send tenant flow logs to the same syslog receiver where the administrative logs are sent. You can send tenant flow logs to the same syslog receiver that processes the administrative logs, but you must configure the tenant flow logs separately. The amphora sends all administrative log messages by using the native log format for the application sending the message. The amphorae log to the Red Hat OpenStack Platform (RHOSP) Controller node in the same location as the other RHOSP logs ( /var/log/containers/octavia/ ). Additional resources Chapter 5, Managing Load-balancing service instance logs 13.3. Migrating a specific Load-balancing service instance In some cases you must migrate a Load-balancing service instance (amphora). For example, if the host is being shut down for maintenance Procedure Source your credentials file. Example Locate the ID of the amphora that you want to migrate. You need to provide the ID in a later step. 
To prevent the Compute scheduler service from scheduling any new amphorae to the Compute node being evacuated, disable the Compute node ( compute-host-1 ). Note Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site. Example Fail over the amphora by using the amphora ID ( ea17210a-1076-48ff-8a1f-ced49ccb5e53 ) that you obtained. Example Additional resources compute service set in the Command Line Interface Reference loadbalancer in the Command Line Interface Reference 13.4. Using SSH to connect to load-balancing instances Use SSH to log in to Load-balancing service instances (amphorae) when troubleshooting service problems. It can be helpful to use Secure Shell (SSH) to log into running Load-balancing service instances (amphorae) when troubleshooting service problems. Prerequisites You must have the Load-balancing service (octavia) SSH private key. Procedure On the director node, start ssh-agent and add your user identity key to the agent: Source your credentials file. Example Determine the IP address on the load-balancing management network ( lb_network_ip ) for the amphora that you want to connect to: Use SSH to connect to the amphora: When you are finished, close your connection to the amphora and stop the SSH agent: Additional resources loadbalancer in the Command Line Interface Reference 13.5. Showing listener statistics Using the OpenStack Client, you can obtain statistics about the listener for a particular Red Hat OpenStack Platform (RHOSP) loadbalancer: current active connections ( active_connections ). total bytes received ( bytes_in ). total bytes sent ( bytes_out ). total requests that were unable to be fulfilled ( request_errors ). total connections handled ( total_connections ). Procedure Source your credentials file. Example View the stats for the listener ( listener1 ). Note Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with values that are appropriate for your site. Example Tip If you do not know the name of the listener, enter the command loadbalancer listener list . Sample output Additional resources loadbalancer listener stats show in the Command Line Interface Reference Section 13.6, "Interpreting listener request errors" 13.6. Interpreting listener request errors You can obtain statistics about the listener for a particular Red Hat OpenStack Platform (RHOSP) loadbalancer. For more information, see Section 13.5, "Showing listener statistics" . One of the statistics tracked by the RHOSP loadbalancer, request_errors , is only counting errors that occurred in the request from the end user connecting to the load balancer. The request_errors variable is not measuring errors reported by the member server. For example, if a tenant connects through the RHOSP Load-balancing service (octavia) to a web server that returns an HTTP status code of 400 (Bad Request) , this error is not collected by the Load-balancing service. Loadbalancers do not inspect the content of data traffic. In this example, the loadbalancer interprets this flow as successful because it transported information between the user and the web server correctly. The following conditions can cause the request_errors variable to increment: early termination from the client, before the request has been sent. read error from the client. client timeout. client closed the connection. 
various bad requests from the client. Additional resources loadbalancer listener stats show in the Command Line Interface Reference Section 13.5, "Showing listener statistics" | [
"source ~/overcloudrc",
"openstack loadbalancer show lb1",
"+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | admin_state_up | True | | created_at | 2022-02-17T15:59:18 | | description | | | flavor_id | None | | id | 265d0b71-c073-40f4-9718-8a182c6d53ca | | listeners | 5aaa67da-350d-4125-9022-238e0f7b7f6f | | name | lb1 | | operating_status | ONLINE | | pools | 48f6664c-b192-4763-846a-da568354da4a | | project_id | 52376c9c5c2e434283266ae7cacd3a9c | | provider | amphora | | provisioning_status | ACTIVE | | updated_at | 2022-02-17T16:01:21 | | vip_address | 192.0.2.177 | | vip_network_id | afeaf55e-7128-4dff-80e2-98f8d1f2f44c | | vip_port_id | 94a12275-1505-4cdc-80c9-4432767a980f | | vip_qos_policy_id | None | | vip_subnet_id | 06ffa90e-2b86-4fe3-9731-c7839b0be6de | +---------------------+--------------------------------------+",
"openstack loadbalancer amphora list | grep 265d0b71-c073-40f4-9718-8a182c6d53ca",
"| 1afabefd-ba09-49e1-8c39-41770aa25070 | 265d0b71-c073-40f4-9718-8a182c6d53ca | ALLOCATED | STANDALONE | 198.51.100.7 | 192.0.2.177 |",
"openstack loadbalancer amphora show 1afabefd-ba09-49e1-8c39-41770aa25070",
"+-----------------+--------------------------------------+ | Field | Value | +-----------------+--------------------------------------+ | id | 1afabefd-ba09-49e1-8c39-41770aa25070 | | loadbalancer_id | 265d0b71-c073-40f4-9718-8a182c6d53ca | | compute_id | ba9fc1c4-8aee-47ad-b47f-98f12ea7b200 | | lb_network_ip | 198.51.100.7 | | vrrp_ip | 192.0.2.36 | | ha_ip | 192.0.2.177 | | vrrp_port_id | 07dcd894-487a-48dc-b0ec-7324fe5d2082 | | ha_port_id | 94a12275-1505-4cdc-80c9-4432767a980f | | cert_expiration | 2022-03-19T15:59:23 | | cert_busy | False | | role | STANDALONE | | status | ALLOCATED | | vrrp_interface | None | | vrrp_id | 1 | | vrrp_priority | None | | cached_zone | nova | | created_at | 2022-02-17T15:59:22 | | updated_at | 2022-02-17T16:00:50 | | image_id | 53001253-5005-4891-bb61-8784ae85e962 | | compute_flavor | 65 | +-----------------+--------------------------------------+",
"openstack loadbalancer listener show listener1",
"+-----------------------------+--------------------------------------+ | Field | Value | +-----------------------------+--------------------------------------+ | admin_state_up | True | | connection_limit | -1 | | created_at | 2022-02-17T16:00:59 | | default_pool_id | 48f6664c-b192-4763-846a-da568354da4a | | default_tls_container_ref | None | | description | | | id | 5aaa67da-350d-4125-9022-238e0f7b7f6f | | insert_headers | None | | l7policies | | | loadbalancers | 265d0b71-c073-40f4-9718-8a182c6d53ca | | name | listener1 | | operating_status | ONLINE | | project_id | 52376c9c5c2e434283266ae7cacd3a9c | | protocol | HTTP | | protocol_port | 80 | | provisioning_status | ACTIVE | | sni_container_refs | [] | | timeout_client_data | 50000 | | timeout_member_connect | 5000 | | timeout_member_data | 50000 | | timeout_tcp_inspect | 0 | | updated_at | 2022-02-17T16:01:21 | | client_ca_tls_container_ref | None | | client_authentication | NONE | | client_crl_container_ref | None | | allowed_cidrs | None | +-----------------------------+--------------------------------------+",
"openstack loadbalancer pool show pool1",
"+----------------------+--------------------------------------+ | Field | Value | +----------------------+--------------------------------------+ | admin_state_up | True | | created_at | 2022-02-17T16:01:08 | | description | | | healthmonitor_id | 4b24180f-74c7-47d2-b0a2-4783ada9a4f0 | | id | 48f6664c-b192-4763-846a-da568354da4a | | lb_algorithm | ROUND_ROBIN | | listeners | 5aaa67da-350d-4125-9022-238e0f7b7f6f | | loadbalancers | 265d0b71-c073-40f4-9718-8a182c6d53ca | | members | b92694bd-3407-461a-92f2-90fb2c4aedd1 | | | 4ccdd1cf-736d-4b31-b67c-81d5f49e528d | | name | pool1 | | operating_status | ONLINE | | project_id | 52376c9c5c2e434283266ae7cacd3a9c | | protocol | HTTP | | provisioning_status | ACTIVE | | session_persistence | None | | updated_at | 2022-02-17T16:01:21 | | tls_container_ref | None | | ca_tls_container_ref | None | | crl_container_ref | None | | tls_enabled | False | +----------------------+--------------------------------------+",
"curl -v https://192.0.2.177 --insecure",
"* About to connect() to 192.0.2.177 port 443 (#0) * Trying 192.0.2.177 * Connected to 192.0.2.177 (192.0.2.177) port 443 (#0) * Initializing NSS with certpath: sql:/etc/pki/nssdb * skipping SSL peer certificate verification * SSL connection using TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 * Server certificate: * subject: CN=www.example.com,O=Dis,L=Springfield,ST=Denial,C=US * start date: Jan 15 09:21:45 2021 GMT * expire date: Jan 15 09:21:45 2021 GMT * common name: www.example.com * issuer: CN=www.example.com,O=Dis,L=Springfield,ST=Denial,C=US > GET / HTTP/1.1 > User-Agent: curl/7.29.0 > Host: 192.0.2.177 > Accept: */* > < HTTP/1.1 200 OK < Content-Length: 30 < * Connection #0 to host 192.0.2.177 left intact",
"source ~/overcloudrc",
"openstack loadbalancer amphora list",
"openstack compute service set compute-host-1 nova-compute --disable",
"openstack loadbalancer amphora failover ea17210a-1076-48ff-8a1f-ced49ccb5e53",
"eval USD(ssh-agent -s) ssh-add",
"source ~/overcloudrc",
"openstack loadbalancer amphora list",
"ssh -A -t heat-admin@<controller_node_IP_address> ssh cloud-user@<lb_network_ip>",
"exit",
"source ~/overcloudrc",
"openstack loadbalancer listener stats show listener1",
"+--------------------+-------+ | Field | Value | +--------------------+-------+ | active_connections | 0 | | bytes_in | 0 | | bytes_out | 0 | | request_errors | 0 | | total_connections | 0 | +--------------------+-------+"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/using_octavia_for_load_balancing-as-a-service/troubleshoot-maintain-lb-service_rhosp-lbaas |
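The listener statistics described in Section 13.5 and Section 13.6 can also be sampled from a script. The following is a minimal sketch that reads request_errors twice and prints the difference; it assumes the jq package is installed, and listener1 and the 60-second interval are placeholders to replace with values for your site.

source ~/overcloudrc
listener=listener1        # placeholder listener name
interval=60               # seconds between the two samples

# request_errors counts only client-side problems (early termination, read errors,
# timeouts, closed connections); HTTP errors returned by member servers are not included.
first=$(openstack loadbalancer listener stats show "$listener" -f json | jq -r '.request_errors')
sleep "$interval"
second=$(openstack loadbalancer listener stats show "$listener" -f json | jq -r '.request_errors')
echo "request_errors increased by $((second - first)) during ${interval}s"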
Chapter 3. Red Hat build of Keycloak Realm Import | Chapter 3. Red Hat build of Keycloak Realm Import 3.1. Importing a Red Hat build of Keycloak Realm Using the Red Hat build of Keycloak Operator, you can perform a realm import for the Keycloak Deployment. Note If a Realm with the same name already exists in Red Hat build of Keycloak, it will not be overwritten. The Realm Import CR only supports creation of new realms and does not update or delete those. Changes to the realm performed directly on Red Hat build of Keycloak are not synced back in the CR. 3.1.1. Creating a Realm Import Custom Resource The following is an example of a Realm Import Custom Resource (CR): apiVersion: k8s.keycloak.org/v2alpha1 kind: KeycloakRealmImport metadata: name: my-realm-kc spec: keycloakCRName: <name of the keycloak CR> realm: ... This CR should be created in the same namespace as the Keycloak Deployment CR, defined in the field keycloakCRName . The realm field accepts a full RealmRepresentation . The recommended way to obtain a RealmRepresentation is by leveraging the export functionality Importing and Exporting Realms . Export the Realm to a single file. Convert the JSON file to YAML. Copy and paste the obtained YAML file as body for the realm key, making sure the indentation is correct. 3.1.2. Applying the Realm Import CR Use oc to create the CR in the correct cluster namespace: Create YAML file example-realm-import.yaml : apiVersion: k8s.keycloak.org/v2alpha1 kind: KeycloakRealmImport metadata: name: my-realm-kc spec: keycloakCRName: <name of the keycloak CR> realm: id: example-realm realm: example-realm displayName: ExampleRealm enabled: true Apply the changes: oc apply -f example-realm-import.yaml To check the status of the running import, enter the following command: oc get keycloakrealmimports/my-realm-kc -o go-template='{{range .status.conditions}}CONDITION: {{.type}}{{"\n"}} STATUS: {{.status}}{{"\n"}} MESSAGE: {{.message}}{{"\n"}}{{end}}' When the import has successfully completed, the output will look like the following example: CONDITION: Done STATUS: true MESSAGE: CONDITION: Started STATUS: false MESSAGE: CONDITION: HasErrors STATUS: false MESSAGE: | [
"apiVersion: k8s.keycloak.org/v2alpha1 kind: KeycloakRealmImport metadata: name: my-realm-kc spec: keycloakCRName: <name of the keycloak CR> realm:",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: KeycloakRealmImport metadata: name: my-realm-kc spec: keycloakCRName: <name of the keycloak CR> realm: id: example-realm realm: example-realm displayName: ExampleRealm enabled: true",
"apply -f example-realm-import.yaml",
"get keycloakrealmimports/my-realm-kc -o go-template='{{range .status.conditions}}CONDITION: {{.type}}{{\"\\n\"}} STATUS: {{.status}}{{\"\\n\"}} MESSAGE: {{.message}}{{\"\\n\"}}{{end}}'",
"CONDITION: Done STATUS: true MESSAGE: CONDITION: Started STATUS: false MESSAGE: CONDITION: HasErrors STATUS: false MESSAGE:"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/operator_guide/realm-import- |
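The conversion step above (export the realm, convert the JSON file to YAML, and paste it under the realm key) can be scripted. The following sketch assumes the mikefarah yq v4 tool is available; example-realm.json, example-realm-import, and example-kc are placeholder names for the exported realm file, the KeycloakRealmImport CR, and the Keycloak CR.

# Wrap an exported realm JSON file in a KeycloakRealmImport CR.
{
  cat <<'EOF'
apiVersion: k8s.keycloak.org/v2alpha1
kind: KeycloakRealmImport
metadata:
  name: example-realm-import
spec:
  keycloakCRName: example-kc
  realm:
EOF
  # Convert the JSON to block-style YAML and indent it under the realm key.
  yq -P '.' example-realm.json | sed 's/^/    /'
} > example-realm-import.yaml

oc apply -f example-realm-import.yaml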
Chapter 9. PrometheusRule [monitoring.coreos.com/v1] | Chapter 9. PrometheusRule [monitoring.coreos.com/v1] Description PrometheusRule defines recording and alerting rules for a Prometheus instance Type object Required spec 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of desired alerting rule definitions for Prometheus. 9.1.1. .spec Description Specification of desired alerting rule definitions for Prometheus. Type object Property Type Description groups array Content of Prometheus rule file groups[] object RuleGroup is a list of sequentially evaluated recording and alerting rules. 9.1.2. .spec.groups Description Content of Prometheus rule file Type array 9.1.3. .spec.groups[] Description RuleGroup is a list of sequentially evaluated recording and alerting rules. Type object Required name Property Type Description interval string Interval determines how often rules in the group are evaluated. limit integer Limit the number of alerts an alerting rule and series a recording rule can produce. Limit is supported starting with Prometheus >= 2.31 and Thanos Ruler >= 0.24. name string Name of the rule group. partial_response_strategy string PartialResponseStrategy is only used by ThanosRuler and will be ignored by Prometheus instances. More info: https://github.com/thanos-io/thanos/blob/main/docs/components/rule.md#partial-response rules array List of alerting and recording rules. rules[] object Rule describes an alerting or recording rule See Prometheus documentation: [alerting]( https://www.prometheus.io/docs/prometheus/latest/configuration/alerting_rules/ ) or [recording]( https://www.prometheus.io/docs/prometheus/latest/configuration/recording_rules/#recording-rules ) rule 9.1.4. .spec.groups[].rules Description List of alerting and recording rules. Type array 9.1.5. .spec.groups[].rules[] Description Rule describes an alerting or recording rule See Prometheus documentation: [alerting]( https://www.prometheus.io/docs/prometheus/latest/configuration/alerting_rules/ ) or [recording]( https://www.prometheus.io/docs/prometheus/latest/configuration/recording_rules/#recording-rules ) rule Type object Required expr Property Type Description alert string Name of the alert. Must be a valid label value. Only one of record and alert must be set. annotations object (string) Annotations to add to each alert. Only valid for alerting rules. expr integer-or-string PromQL expression to evaluate. for string Alerts are considered firing once they have been returned for this long. keep_firing_for string KeepFiringFor defines how long an alert will continue firing after the condition that triggered it has cleared. labels object (string) Labels to add or overwrite. record string Name of the time series to output to. 
Must be a valid metric name. Only one of record and alert must be set. 9.2. API endpoints The following API endpoints are available: /apis/monitoring.coreos.com/v1/prometheusrules GET : list objects of kind PrometheusRule /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheusrules DELETE : delete collection of PrometheusRule GET : list objects of kind PrometheusRule POST : create a PrometheusRule /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheusrules/{name} DELETE : delete a PrometheusRule GET : read the specified PrometheusRule PATCH : partially update the specified PrometheusRule PUT : replace the specified PrometheusRule 9.2.1. /apis/monitoring.coreos.com/v1/prometheusrules HTTP method GET Description list objects of kind PrometheusRule Table 9.1. HTTP responses HTTP code Reponse body 200 - OK PrometheusRuleList schema 401 - Unauthorized Empty 9.2.2. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheusrules HTTP method DELETE Description delete collection of PrometheusRule Table 9.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind PrometheusRule Table 9.3. HTTP responses HTTP code Reponse body 200 - OK PrometheusRuleList schema 401 - Unauthorized Empty HTTP method POST Description create a PrometheusRule Table 9.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.5. Body parameters Parameter Type Description body PrometheusRule schema Table 9.6. HTTP responses HTTP code Reponse body 200 - OK PrometheusRule schema 201 - Created PrometheusRule schema 202 - Accepted PrometheusRule schema 401 - Unauthorized Empty 9.2.3. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheusrules/{name} Table 9.7. Global path parameters Parameter Type Description name string name of the PrometheusRule HTTP method DELETE Description delete a PrometheusRule Table 9.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 9.9. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified PrometheusRule Table 9.10. HTTP responses HTTP code Reponse body 200 - OK PrometheusRule schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PrometheusRule Table 9.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.12. HTTP responses HTTP code Reponse body 200 - OK PrometheusRule schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PrometheusRule Table 9.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.14. Body parameters Parameter Type Description body PrometheusRule schema Table 9.15. HTTP responses HTTP code Reponse body 200 - OK PrometheusRule schema 201 - Created PrometheusRule schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/monitoring_apis/prometheusrule-monitoring-coreos-com-v1 |
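As a complement to the field reference above, the following is a minimal, hypothetical PrometheusRule manifest with one rule group and one alerting rule. The rule name, namespace, expression, and threshold are illustrative only and are not taken from this document.

oc apply -f - <<'EOF'
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-alert-rules
  namespace: ns1
spec:
  groups:
  - name: example.rules
    interval: 30s
    rules:
    - alert: ExampleHighErrorRate
      expr: rate(http_requests_total{code=~"5.."}[5m]) > 1
      for: 10m
      labels:
        severity: warning
      annotations:
        summary: "HTTP 5xx rate has been above 1 request/s for 10 minutes"
EOF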
Chapter 1. Documentation moved | Chapter 1. Documentation moved The OpenShift sandboxed containers user guide and release notes have moved to a new location . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/openshift_sandboxed_containers/sandboxed-containers-moved |
Pipelines | Pipelines OpenShift Container Platform 4.16 A cloud-native continuous integration and continuous delivery solution based on Kubernetes resources Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/pipelines/index |
F.10. Hash Space Allocation | F.10. Hash Space Allocation F.10.1. About Hash Space Allocation Red Hat JBoss Data Grid is responsible for allocating a portion of the total available hash space to each node. During subsequent operations that must store an entry, JBoss Data Grid creates a hash of the relevant key and stores the entry on the node that owns that portion of hash space. F.10.2. Locating a Key in the Hash Space Red Hat JBoss Data Grid always uses an algorithm to locate a key in the hash space. As a result, the node that stores the key is never manually specified. This scheme allows any node to know which node owns a particular key without such ownership information being distributed. This scheme reduces the amount of overhead and, more importantly, improves redundancy because the ownership information does not need to be replicated in case of node failure. F.10.3. Requesting a Full Byte Array How can I request that Red Hat JBoss Data Grid return a full byte array instead of partial byte array contents? By default, JBoss Data Grid prints byte arrays to logs only partially, to avoid unnecessarily printing large byte arrays. This occurs when either: JBoss Data Grid caches are configured for lazy deserialization. Lazy deserialization is not available in JBoss Data Grid's Remote Client-Server mode. A Memcached or Hot Rod server is run. In such cases, only the first ten positions of the byte array are displayed in the logs. To display the complete contents of the byte array in the logs, pass the -Dinfinispan.arrays.debug=true system property at start up. Example F.1. Partial Byte Array Log | [
"2010-04-14 15:46:09,342 TRACE [ReadCommittedEntry] (HotRodWorker-1-1) Updating entry (key=CacheKey{data=ByteArray{size=19, hashCode=1b3278a, array=[107, 45, 116, 101, 115, 116, 82, 101, 112, 108, ..]}} removed=false valid=true changed=true created=true value=CacheValue{data=ByteArray{size=19, array=[118, 45, 116, 101, 115, 116, 82, 101, 112, 108, ..]}, version=281483566645249}] And here's a log message where the full byte array is shown: 2010-04-14 15:45:00,723 TRACE [ReadCommittedEntry] (Incoming-2,Infinispan-Cluster,eq-6834) Updating entry (key=CacheKey{data=ByteArray{size=19, hashCode=6cc2a4, array=[107, 45, 116, 101, 115, 116, 82, 101, 112, 108, 105, 99, 97, 116, 101, 100, 80, 117, 116]}} removed=false valid=true changed=true created=true value=CacheValue{data=ByteArray{size=19, array=[118, 45, 116, 101, 115, 116, 82, 101, 112, 108, 105, 99, 97, 116, 101, 100, 80, 117, 116]}, version=281483566645249}]"
]
| https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-hash_space_allocation |
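As a sketch of how the -Dinfinispan.arrays.debug=true property from Section F.10.3 might be passed at start up: the paths and class name below are assumptions about the installation layout, not values from this document.

# Remote Client-Server mode: standalone.sh forwards -D options to the server JVM.
./bin/standalone.sh -Dinfinispan.arrays.debug=true

# Library mode: set the property directly on the java command line of the application.
java -Dinfinispan.arrays.debug=true -cp app.jar com.example.Main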
16.2. Setting up Squid as a Caching Proxy With LDAP Authentication | 16.2. Setting up Squid as a Caching Proxy With LDAP Authentication This section describes a basic configuration of Squid as a caching proxy that uses LDAP to authenticate users. The procedure configures the proxy so that only authenticated users can use it. Prerequisites The procedure assumes that the /etc/squid/squid.conf file is as provided by the squid package. If you edited this file before, remove the file and reinstall the package. A service user, such as uid=proxy_user,cn=users,cn=accounts,dc=example,dc=com , exists in the LDAP directory. Squid uses this account only to search for the authenticating user. If the authenticating user exists, Squid binds as this user to the directory to verify the authentication. Procedure Install the squid package: Edit the /etc/squid/squid.conf file: To configure the basic_ldap_auth helper utility, add the following configuration entry to the top of /etc/squid/squid.conf : The following describes the parameters passed to the basic_ldap_auth helper utility in the example above: -B base_DN sets the LDAP search base. -D proxy_service_user_DN sets the distinguished name (DN) of the account Squid uses to search for the authenticating user in the directory. -W path_to_password_file sets the path to the file that contains the password of the proxy service user. Using a password file prevents the password from being visible in the operating system's process list. -f LDAP_filter specifies the LDAP search filter. Squid replaces the %s variable with the user name provided by the authenticating user. The (&(objectClass=person)(uid=%s)) filter in the example requires that the user name match the value set in the uid attribute and that the directory entry contain the person object class. -ZZ enforces a TLS-encrypted connection over the LDAP protocol using the STARTTLS command. Omit the -ZZ in the following situations: The LDAP server does not support encrypted connections. The port specified in the URL uses the LDAPS protocol. The -H LDAP_URL parameter specifies the protocol, the host name or IP address, and the port of the LDAP server in URL format. Add the following ACL and rule so that Squid allows only authenticated users to use the proxy: Important Specify these settings before the http_access deny all rule. Remove the following rule to disable bypassing the proxy authentication from IP ranges specified in localnet ACLs: The following ACL exists in the default configuration and defines 443 as a port that uses the HTTPS protocol: If users should be able to use the HTTPS protocol on other ports as well, add an ACL for each of these ports: Update the list of acl Safe_ports rules to configure the ports to which Squid can establish a connection. For example, to allow clients using the proxy to access resources only on ports 21 (FTP), 80 (HTTP), and 443 (HTTPS), keep only the following acl Safe_ports statements in the configuration: By default, the configuration contains the http_access deny !Safe_ports rule, which denies access to ports that are not defined in Safe_ports ACLs. Configure the cache type, the path to the cache directory, the cache size, and further cache type-specific settings in the cache_dir parameter: With these settings: Squid uses the ufs cache type. Squid stores its cache in the /var/spool/squid/ directory. The cache grows up to 10000 MB. Squid creates 16 level-1 sub-directories in the /var/spool/squid/ directory.
Squid creates 256 sub-directories in each level-1 directory. If you do not set a cache_dir directive, Squid stores the cache in memory. If you set a different cache directory than /var/spool/squid/ in the cache_dir parameter: Create the cache directory: Configure the permissions for the cache directory: If you run SELinux in enforcing mode, set the squid_cache_t context for the cache directory: If the semanage utility is not available on your system, install the policycoreutils-python-utils package. Store the password of the LDAP service user in the /etc/squid/ldap_password file, and set appropriate permissions for the file: Open the 3128 port in the firewall: Start the squid service: Enable the squid service to start automatically when the system boots: Verification Steps To verify that the proxy works correctly, download a web page using the curl utility: If curl does not display any error and the index.html file was downloaded to the current directory, the proxy works. Troubleshooting Steps To verify that the helper utility works correctly: Manually start the helper utility with the same settings you used in the auth_param parameter: Enter a valid user name and password, and press Enter : If the helper utility returns OK , authentication succeeded. | [
"yum install squid",
"auth_param basic program /usr/lib64/squid/basic_ldap_auth -b \"cn=users,cn=accounts,dc=example,dc=com\" -D \" uid=proxy_user,cn=users,cn=accounts,dc=example,dc=com \" -W /etc/squid/ldap_password -f \" (&(objectClass=person)(uid=%s)) \" -ZZ -H ldap://ldap_server.example.com:389",
"acl ldap-auth proxy_auth REQUIRED http_access allow ldap-auth",
"http_access allow localnet",
"acl SSL_ports port 443",
"acl SSL_ports port port_number",
"acl Safe_ports port 21 acl Safe_ports port 80 acl Safe_ports port 443",
"cache_dir ufs /var/spool/squid 10000 16 256",
"mkdir -p path_to_cache_directory",
"chown squid:squid path_to_cache_directory",
"semanage fcontext -a -t squid_cache_t \" path_to_cache_directory (/.*)?\" restorecon -Rv path_to_cache_directory",
"echo \" password \" > /etc/squid/ldap_password chown root:squid /etc/squid/ldap_password chmod 640 /etc/squid/ldap_password",
"firewall-cmd --permanent --add-port=3128/tcp firewall-cmd --reload",
"systemctl start squid",
"systemctl enable squid",
"curl -O -L \" https://www.redhat.com/index.html \" -x \" user_name:[email protected] : 3128 \"",
"/usr/lib64/squid/basic_ldap_auth -b \"cn=users,cn=accounts,dc=example,dc=com\" -D \" uid=proxy_user,cn=users,cn=accounts,dc=example,dc=com \" -W /etc/squid/ldap_password -f \" (&(objectClass=person)(uid=%s)) \" -ZZ -H ldap://ldap_server.example.com:389",
"user_name password"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/setting-up-squid-as-a-caching-proxy-with-ldap-authentication |
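A non-interactive variant of the troubleshooting step above can be handy in scripts: pipe a "user_name password" pair into the helper on standard input and check that it prints OK. The user name and password below are placeholders; reuse the exact options from your auth_param line. The helper prints OK for a successful bind and ERR otherwise.

printf 'test_user test_password\n' | \
  /usr/lib64/squid/basic_ldap_auth -b "cn=users,cn=accounts,dc=example,dc=com" \
    -D "uid=proxy_user,cn=users,cn=accounts,dc=example,dc=com" \
    -W /etc/squid/ldap_password \
    -f "(&(objectClass=person)(uid=%s))" \
    -ZZ -H ldap://ldap_server.example.com:389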
Chapter 2. Installing metering | Chapter 2. Installing metering Important Metering is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. Review the following sections before installing metering into your cluster. To get started installing metering, first install the Metering Operator from OperatorHub. , configure your instance of metering by creating a MeteringConfig custom resource (CR). Installing the Metering Operator creates a default MeteringConfig resource that you can modify using the examples in the documentation. After creating your MeteringConfig resource, install the metering stack. Last, verify your installation. 2.1. Prerequisites Metering requires the following components: A StorageClass resource for dynamic volume provisioning. Metering supports a number of different storage solutions. 4GB memory and 4 CPU cores available cluster capacity and at least one node with 2 CPU cores and 2GB memory capacity available. The minimum resources needed for the largest single pod installed by metering are 2GB of memory and 2 CPU cores. Memory and CPU consumption may often be lower, but will spike when running reports, or collecting data for larger clusters. 2.2. Installing the Metering Operator You can install metering by deploying the Metering Operator. The Metering Operator creates and manages the components of the metering stack. Note You cannot create a project starting with openshift- using the web console or by using the oc new-project command in the CLI. Note If the Metering Operator is installed using a namespace other than openshift-metering , the metering reports are only viewable using the CLI. It is strongly suggested throughout the installation steps to use the openshift-metering namespace. 2.2.1. Installing metering using the web console You can use the OpenShift Container Platform web console to install the Metering Operator. Procedure Create a namespace object YAML file for the Metering Operator with the oc create -f <file-name>.yaml command. You must use the CLI to create the namespace. For example, metering-namespace.yaml : apiVersion: v1 kind: Namespace metadata: name: openshift-metering 1 annotations: openshift.io/node-selector: "" 2 labels: openshift.io/cluster-monitoring: "true" 1 It is strongly recommended to deploy metering in the openshift-metering namespace. 2 Include this annotation before configuring specific node selectors for the operand pods. In the OpenShift Container Platform web console, click Operators OperatorHub . Filter for metering to find the Metering Operator. Click the Metering card, review the package description, and then click Install . Select an Update Channel , Installation Mode , and Approval Strategy . Click Install . Verify that the Metering Operator is installed by switching to the Operators Installed Operators page. The Metering Operator has a Status of Succeeded when the installation is complete. Note It might take several minutes for the Metering Operator to appear. Click Metering on the Installed Operators page for Operator Details . From the Details page you can create different resources related to metering. 
To complete the metering installation, create a MeteringConfig resource to configure metering and install the components of the metering stack. 2.2.2. Installing metering using the CLI You can use the OpenShift Container Platform CLI to install the Metering Operator. Procedure Create a Namespace object YAML file for the Metering Operator. You must use the CLI to create the namespace. For example, metering-namespace.yaml : apiVersion: v1 kind: Namespace metadata: name: openshift-metering 1 annotations: openshift.io/node-selector: "" 2 labels: openshift.io/cluster-monitoring: "true" 1 It is strongly recommended to deploy metering in the openshift-metering namespace. 2 Include this annotation before configuring specific node selectors for the operand pods. Create the Namespace object: USD oc create -f <file-name>.yaml For example: USD oc create -f openshift-metering.yaml Create the OperatorGroup object YAML file. For example, metering-og : apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-metering 1 namespace: openshift-metering 2 spec: targetNamespaces: - openshift-metering 1 The name is arbitrary. 2 Specify the openshift-metering namespace. Create a Subscription object YAML file to subscribe a namespace to the Metering Operator. This object targets the most recently released version in the redhat-operators catalog source. For example, metering-sub.yaml : apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: metering-ocp 1 namespace: openshift-metering 2 spec: channel: "4.7" 3 source: "redhat-operators" 4 sourceNamespace: "openshift-marketplace" name: "metering-ocp" installPlanApproval: "Automatic" 5 1 The name is arbitrary. 2 You must specify the openshift-metering namespace. 3 Specify 4.7 as the channel. 4 Specify the redhat-operators catalog source, which contains the metering-ocp package manifests. If your OpenShift Container Platform is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator LifeCycle Manager (OLM). 5 Specify "Automatic" install plan approval. 2.3. Installing the metering stack After adding the Metering Operator to your cluster you can install the components of metering by installing the metering stack. 2.4. Prerequisites Review the configuration options Create a MeteringConfig resource. You can begin the following process to generate a default MeteringConfig resource, then use the examples in the documentation to modify this default file for your specific installation. Review the following topics to create your MeteringConfig resource: For configuration options, review About configuring metering . At a minimum, you need to configure persistent storage and configure the Hive metastore . Important There can only be one MeteringConfig resource in the openshift-metering namespace. Any other configuration is not supported. Procedure From the web console, ensure you are on the Operator Details page for the Metering Operator in the openshift-metering project. You can navigate to this page by clicking Operators Installed Operators , then selecting the Metering Operator. Under Provided APIs , click Create Instance on the Metering Configuration card. This opens a YAML editor with the default MeteringConfig resource file where you can define your configuration. Note For example configuration files and all supported configuration options, review the configuring metering documentation . 
Enter your MeteringConfig resource into the YAML editor and click Create . The MeteringConfig resource begins to create the necessary resources for your metering stack. You can now move on to verifying your installation. 2.5. Verifying the metering installation You can verify the metering installation by performing any of the following checks: Check the Metering Operator ClusterServiceVersion (CSV) resource for the metering version. This can be done through either the web console or CLI. Procedure (UI) Navigate to Operators Installed Operators in the openshift-metering namespace. Click Metering Operator . Click Subscription for Subscription Details . Check the Installed Version . Procedure (CLI) Check the Metering Operator CSV in the openshift-metering namespace: USD oc --namespace openshift-metering get csv Example output NAME DISPLAY VERSION REPLACES PHASE elasticsearch-operator.4.7.0-202006231303.p0 OpenShift Elasticsearch Operator 4.7.0-202006231303.p0 Succeeded metering-operator.v4.7.0 Metering 4.7.0 Succeeded Check that all required pods in the openshift-metering namespace are created. This can be done through either the web console or CLI. Note Many pods rely on other components to function before they themselves can be considered ready. Some pods may restart if other pods take too long to start. This is to be expected during the Metering Operator installation. Procedure (UI) Navigate to Workloads Pods in the metering namespace and verify that pods are being created. This can take several minutes after installing the metering stack. Procedure (CLI) Check that all required pods in the openshift-metering namespace are created: USD oc -n openshift-metering get pods Example output NAME READY STATUS RESTARTS AGE hive-metastore-0 2/2 Running 0 3m28s hive-server-0 3/3 Running 0 3m28s metering-operator-68dd64cfb6-2k7d9 2/2 Running 0 5m17s presto-coordinator-0 2/2 Running 0 3m9s reporting-operator-5588964bf8-x2tkn 2/2 Running 0 2m40s Verify that the ReportDataSource resources are beginning to import data, indicated by a valid timestamp in the EARLIEST METRIC column. This might take several minutes. 
Filter out the "-raw" ReportDataSource resources, which do not import data: USD oc get reportdatasources -n openshift-metering | grep -v raw Example output NAME EARLIEST METRIC NEWEST METRIC IMPORT START IMPORT END LAST IMPORT TIME AGE node-allocatable-cpu-cores 2019-08-05T16:52:00Z 2019-08-05T18:52:00Z 2019-08-05T16:52:00Z 2019-08-05T18:52:00Z 2019-08-05T18:54:45Z 9m50s node-allocatable-memory-bytes 2019-08-05T16:51:00Z 2019-08-05T18:51:00Z 2019-08-05T16:51:00Z 2019-08-05T18:51:00Z 2019-08-05T18:54:45Z 9m50s node-capacity-cpu-cores 2019-08-05T16:51:00Z 2019-08-05T18:29:00Z 2019-08-05T16:51:00Z 2019-08-05T18:29:00Z 2019-08-05T18:54:39Z 9m50s node-capacity-memory-bytes 2019-08-05T16:52:00Z 2019-08-05T18:41:00Z 2019-08-05T16:52:00Z 2019-08-05T18:41:00Z 2019-08-05T18:54:44Z 9m50s persistentvolumeclaim-capacity-bytes 2019-08-05T16:51:00Z 2019-08-05T18:29:00Z 2019-08-05T16:51:00Z 2019-08-05T18:29:00Z 2019-08-05T18:54:43Z 9m50s persistentvolumeclaim-phase 2019-08-05T16:51:00Z 2019-08-05T18:29:00Z 2019-08-05T16:51:00Z 2019-08-05T18:29:00Z 2019-08-05T18:54:28Z 9m50s persistentvolumeclaim-request-bytes 2019-08-05T16:52:00Z 2019-08-05T18:30:00Z 2019-08-05T16:52:00Z 2019-08-05T18:30:00Z 2019-08-05T18:54:34Z 9m50s persistentvolumeclaim-usage-bytes 2019-08-05T16:52:00Z 2019-08-05T18:30:00Z 2019-08-05T16:52:00Z 2019-08-05T18:30:00Z 2019-08-05T18:54:36Z 9m49s pod-limit-cpu-cores 2019-08-05T16:52:00Z 2019-08-05T18:30:00Z 2019-08-05T16:52:00Z 2019-08-05T18:30:00Z 2019-08-05T18:54:26Z 9m49s pod-limit-memory-bytes 2019-08-05T16:51:00Z 2019-08-05T18:40:00Z 2019-08-05T16:51:00Z 2019-08-05T18:40:00Z 2019-08-05T18:54:30Z 9m49s pod-persistentvolumeclaim-request-info 2019-08-05T16:51:00Z 2019-08-05T18:40:00Z 2019-08-05T16:51:00Z 2019-08-05T18:40:00Z 2019-08-05T18:54:37Z 9m49s pod-request-cpu-cores 2019-08-05T16:51:00Z 2019-08-05T18:18:00Z 2019-08-05T16:51:00Z 2019-08-05T18:18:00Z 2019-08-05T18:54:24Z 9m49s pod-request-memory-bytes 2019-08-05T16:52:00Z 2019-08-05T18:08:00Z 2019-08-05T16:52:00Z 2019-08-05T18:08:00Z 2019-08-05T18:54:32Z 9m49s pod-usage-cpu-cores 2019-08-05T16:52:00Z 2019-08-05T17:57:00Z 2019-08-05T16:52:00Z 2019-08-05T17:57:00Z 2019-08-05T18:54:10Z 9m49s pod-usage-memory-bytes 2019-08-05T16:52:00Z 2019-08-05T18:08:00Z 2019-08-05T16:52:00Z 2019-08-05T18:08:00Z 2019-08-05T18:54:20Z 9m49s After all pods are ready and you have verified that data is being imported, you can begin using metering to collect data and report on your cluster. 2.6. Additional resources For more information on configuration steps and available storage platforms, see Configuring persistent storage . For the steps to configure Hive, see Configuring the Hive metastore . | [
"apiVersion: v1 kind: Namespace metadata: name: openshift-metering 1 annotations: openshift.io/node-selector: \"\" 2 labels: openshift.io/cluster-monitoring: \"true\"",
"apiVersion: v1 kind: Namespace metadata: name: openshift-metering 1 annotations: openshift.io/node-selector: \"\" 2 labels: openshift.io/cluster-monitoring: \"true\"",
"oc create -f <file-name>.yaml",
"oc create -f openshift-metering.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-metering 1 namespace: openshift-metering 2 spec: targetNamespaces: - openshift-metering",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: metering-ocp 1 namespace: openshift-metering 2 spec: channel: \"4.7\" 3 source: \"redhat-operators\" 4 sourceNamespace: \"openshift-marketplace\" name: \"metering-ocp\" installPlanApproval: \"Automatic\" 5",
"oc --namespace openshift-metering get csv",
"NAME DISPLAY VERSION REPLACES PHASE elasticsearch-operator.4.7.0-202006231303.p0 OpenShift Elasticsearch Operator 4.7.0-202006231303.p0 Succeeded metering-operator.v4.7.0 Metering 4.7.0 Succeeded",
"oc -n openshift-metering get pods",
"NAME READY STATUS RESTARTS AGE hive-metastore-0 2/2 Running 0 3m28s hive-server-0 3/3 Running 0 3m28s metering-operator-68dd64cfb6-2k7d9 2/2 Running 0 5m17s presto-coordinator-0 2/2 Running 0 3m9s reporting-operator-5588964bf8-x2tkn 2/2 Running 0 2m40s",
"oc get reportdatasources -n openshift-metering | grep -v raw",
"NAME EARLIEST METRIC NEWEST METRIC IMPORT START IMPORT END LAST IMPORT TIME AGE node-allocatable-cpu-cores 2019-08-05T16:52:00Z 2019-08-05T18:52:00Z 2019-08-05T16:52:00Z 2019-08-05T18:52:00Z 2019-08-05T18:54:45Z 9m50s node-allocatable-memory-bytes 2019-08-05T16:51:00Z 2019-08-05T18:51:00Z 2019-08-05T16:51:00Z 2019-08-05T18:51:00Z 2019-08-05T18:54:45Z 9m50s node-capacity-cpu-cores 2019-08-05T16:51:00Z 2019-08-05T18:29:00Z 2019-08-05T16:51:00Z 2019-08-05T18:29:00Z 2019-08-05T18:54:39Z 9m50s node-capacity-memory-bytes 2019-08-05T16:52:00Z 2019-08-05T18:41:00Z 2019-08-05T16:52:00Z 2019-08-05T18:41:00Z 2019-08-05T18:54:44Z 9m50s persistentvolumeclaim-capacity-bytes 2019-08-05T16:51:00Z 2019-08-05T18:29:00Z 2019-08-05T16:51:00Z 2019-08-05T18:29:00Z 2019-08-05T18:54:43Z 9m50s persistentvolumeclaim-phase 2019-08-05T16:51:00Z 2019-08-05T18:29:00Z 2019-08-05T16:51:00Z 2019-08-05T18:29:00Z 2019-08-05T18:54:28Z 9m50s persistentvolumeclaim-request-bytes 2019-08-05T16:52:00Z 2019-08-05T18:30:00Z 2019-08-05T16:52:00Z 2019-08-05T18:30:00Z 2019-08-05T18:54:34Z 9m50s persistentvolumeclaim-usage-bytes 2019-08-05T16:52:00Z 2019-08-05T18:30:00Z 2019-08-05T16:52:00Z 2019-08-05T18:30:00Z 2019-08-05T18:54:36Z 9m49s pod-limit-cpu-cores 2019-08-05T16:52:00Z 2019-08-05T18:30:00Z 2019-08-05T16:52:00Z 2019-08-05T18:30:00Z 2019-08-05T18:54:26Z 9m49s pod-limit-memory-bytes 2019-08-05T16:51:00Z 2019-08-05T18:40:00Z 2019-08-05T16:51:00Z 2019-08-05T18:40:00Z 2019-08-05T18:54:30Z 9m49s pod-persistentvolumeclaim-request-info 2019-08-05T16:51:00Z 2019-08-05T18:40:00Z 2019-08-05T16:51:00Z 2019-08-05T18:40:00Z 2019-08-05T18:54:37Z 9m49s pod-request-cpu-cores 2019-08-05T16:51:00Z 2019-08-05T18:18:00Z 2019-08-05T16:51:00Z 2019-08-05T18:18:00Z 2019-08-05T18:54:24Z 9m49s pod-request-memory-bytes 2019-08-05T16:52:00Z 2019-08-05T18:08:00Z 2019-08-05T16:52:00Z 2019-08-05T18:08:00Z 2019-08-05T18:54:32Z 9m49s pod-usage-cpu-cores 2019-08-05T16:52:00Z 2019-08-05T17:57:00Z 2019-08-05T16:52:00Z 2019-08-05T17:57:00Z 2019-08-05T18:54:10Z 9m49s pod-usage-memory-bytes 2019-08-05T16:52:00Z 2019-08-05T18:08:00Z 2019-08-05T16:52:00Z 2019-08-05T18:08:00Z 2019-08-05T18:54:20Z 9m49s"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/metering/installing-metering |
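The verification steps above can be partially scripted. The following sketch waits for the metering pods to become ready and then lists the non-raw ReportDataSources so you can check the EARLIEST METRIC column; the timeout value is arbitrary.

# Wait for all metering pods to become Ready (this can take several minutes).
oc -n openshift-metering wait --for=condition=Ready pod --all --timeout=600s

# Confirm that the non-raw ReportDataSources have started importing data.
oc -n openshift-metering get reportdatasources | grep -v raw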
3.2. Listing Data Centers | 3.2. Listing Data Centers This Ruby example lists the data centers. # Get the reference to the root of the services tree: system_service = connection.system_service # Get the reference to the service that manages the # collection of data centers: dcs_service = system_service.data_centers_service # Retrieve the list of data centers and for each one # print its name: dcs = dcs_service.list dcs.each do |dc| puts dc.name end In an environment with only the Default data center, the example outputs: For more information, see http://www.rubydoc.info/gems/ovirt-engine-sdk/OvirtSDK4/DataCentersService:list . | [
"Get the reference to the root of the services tree: system_service = connection.system_service Get the reference to the service that manages the collection of data centers: dcs_service = system_service.data_centers_service Retrieve the list of data centers and for each one print its name: dcs = dcs_service.list dcs.each do |dc| puts dc.name end",
"Default"
]
| https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/ruby_sdk_guide/listing_data_centers |
4.8. SNMP | In Red Hat Enterprise Linux 6.4 and earlier, Net-SNMP shipped its configuration file readable to any user on the system. Since configuration files can contain sensitive information like passwords, as of Red Hat Enterprise Linux 6.5, the configuration file is readable only by root. This change affects user scripts that attempt to access the SNMP configuration file, /etc/snmp/snmpd.conf . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/migration_planning_guide/sect-networking-snmp |
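A quick way to confirm the tightened default and adapt an affected script is shown below; running the read through sudo is only one possible workaround.

ls -l /etc/snmp/snmpd.conf          # readable only by root as of Red Hat Enterprise Linux 6.5
sudo cat /etc/snmp/snmpd.conf       # user scripts can read the file with elevated privileges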
16.7.2. Application Access | The console user is also allowed access to certain programs with a file bearing the command name in the /etc/security/console.apps/ directory. One notable group of applications that the console user has access to is the three programs that shut down or reboot the system. These are: /sbin/halt /sbin/reboot /sbin/poweroff Because these are PAM-aware applications, they call the pam_console.so module as a requirement for use. For more information, refer to Section 16.8.1, "Installed Documentation" . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-pam-console-halt |
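The following inspection sketch, which assumes a default Red Hat Enterprise Linux 4 layout, lists the programs available to the console user and checks whether a PAM-aware entry such as halt references the pam_console.so module.

ls /etc/security/console.apps/      # one file per command the console user may run
grep pam_console /etc/pam.d/halt    # halt is PAM-aware and should pull in pam_console.so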
Chapter 13. Job [batch/v1] | Chapter 13. Job [batch/v1] Description Job represents the configuration of a single job. Type object 13.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object JobSpec describes how the job execution will look like. status object JobStatus represents the current state of a Job. 13.1.1. .spec Description JobSpec describes how the job execution will look like. Type object Required template Property Type Description activeDeadlineSeconds integer Specifies the duration in seconds relative to the startTime that the job may be continuously active before the system tries to terminate it; value must be positive integer. If a Job is suspended (at creation or through an update), this timer will effectively be stopped and reset when the Job is resumed again. backoffLimit integer Specifies the number of retries before marking this job failed. Defaults to 6 backoffLimitPerIndex integer Specifies the limit for the number of retries within an index before marking this index as failed. When enabled the number of failures per index is kept in the pod's batch.kubernetes.io/job-index-failure-count annotation. It can only be set when Job's completionMode=Indexed, and the Pod's restart policy is Never. The field is immutable. This field is beta-level. It can be used when the JobBackoffLimitPerIndex feature gate is enabled (enabled by default). completionMode string completionMode specifies how Pod completions are tracked. It can be NonIndexed (default) or Indexed . NonIndexed means that the Job is considered complete when there have been .spec.completions successfully completed Pods. Each Pod completion is homologous to each other. Indexed means that the Pods of a Job get an associated completion index from 0 to (.spec.completions - 1), available in the annotation batch.kubernetes.io/job-completion-index. The Job is considered complete when there is one successfully completed Pod for each index. When value is Indexed , .spec.completions must be specified and .spec.parallelism must be less than or equal to 10^5. In addition, The Pod name takes the form USD(job-name)-USD(index)-USD(random-string) , the Pod hostname takes the form USD(job-name)-USD(index) . More completion modes can be added in the future. If the Job controller observes a mode that it doesn't recognize, which is possible during upgrades due to version skew, the controller skips updates for the Job. Possible enum values: - "Indexed" is a Job completion mode. In this mode, the Pods of a Job get an associated completion index from 0 to (.spec.completions - 1). The Job is considered complete when a Pod completes for each completion index. - "NonIndexed" is a Job completion mode. 
In this mode, the Job is considered complete when there have been .spec.completions successfully completed Pods. Pod completions are homologous to each other. completions integer Specifies the desired number of successfully finished pods the job should be run with. Setting to null means that the success of any pod signals the success of all pods, and allows parallelism to have any positive value. Setting to 1 means that parallelism is limited to 1 and the success of that pod signals the success of the job. More info: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/ manualSelector boolean manualSelector controls generation of pod labels and pod selectors. Leave manualSelector unset unless you are certain what you are doing. When false or unset, the system pick labels unique to this job and appends those labels to the pod template. When true, the user is responsible for picking unique labels and specifying the selector. Failure to pick a unique label may cause this and other jobs to not function correctly. However, You may see manualSelector=true in jobs that were created with the old extensions/v1beta1 API. More info: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#specifying-your-own-pod-selector maxFailedIndexes integer Specifies the maximal number of failed indexes before marking the Job as failed, when backoffLimitPerIndex is set. Once the number of failed indexes exceeds this number the entire Job is marked as Failed and its execution is terminated. When left as null the job continues execution of all of its indexes and is marked with the Complete Job condition. It can only be specified when backoffLimitPerIndex is set. It can be null or up to completions. It is required and must be less than or equal to 10^4 when is completions greater than 10^5. This field is beta-level. It can be used when the JobBackoffLimitPerIndex feature gate is enabled (enabled by default). parallelism integer Specifies the maximum desired number of pods the job should run at any given time. The actual number of pods running in steady state will be less than this number when ((.spec.completions - .status.successful) < .spec.parallelism), i.e. when the work left to do is less than max parallelism. More info: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/ podFailurePolicy object PodFailurePolicy describes how failed pods influence the backoffLimit. podReplacementPolicy string podReplacementPolicy specifies when to create replacement Pods. Possible values are: - TerminatingOrFailed means that we recreate pods when they are terminating (has a metadata.deletionTimestamp) or failed. - Failed means to wait until a previously created Pod is fully terminated (has phase Failed or Succeeded) before creating a replacement Pod. When using podFailurePolicy, Failed is the the only allowed value. TerminatingOrFailed and Failed are allowed values when podFailurePolicy is not in use. This is an beta field. To use this, enable the JobPodReplacementPolicy feature toggle. This is on by default. Possible enum values: - "Failed" means to wait until a previously created Pod is fully terminated (has phase Failed or Succeeded) before creating a replacement Pod. - "TerminatingOrFailed" means that we recreate pods when they are terminating (has a metadata.deletionTimestamp) or failed. selector LabelSelector A label query over pods that should match the pod count. Normally, the system sets this field for you. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors suspend boolean suspend specifies whether the Job controller should create Pods or not. If a Job is created with suspend set to true, no Pods are created by the Job controller. If a Job is suspended after creation (i.e. the flag goes from false to true), the Job controller will delete all active Pods associated with this Job. Users must design their workload to gracefully handle this. Suspending a Job will reset the StartTime field of the Job, effectively resetting the ActiveDeadlineSeconds timer too. Defaults to false. template PodTemplateSpec Describes the pod that will be created when executing a job. The only allowed template.spec.restartPolicy values are "Never" or "OnFailure". More info: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/ ttlSecondsAfterFinished integer ttlSecondsAfterFinished limits the lifetime of a Job that has finished execution (either Complete or Failed). If this field is set, ttlSecondsAfterFinished after the Job finishes, it is eligible to be automatically deleted. When the Job is being deleted, its lifecycle guarantees (e.g. finalizers) will be honored. If this field is unset, the Job won't be automatically deleted. If this field is set to zero, the Job becomes eligible to be deleted immediately after it finishes. 13.1.2. .spec.podFailurePolicy Description PodFailurePolicy describes how failed pods influence the backoffLimit. Type object Required rules Property Type Description rules array A list of pod failure policy rules. The rules are evaluated in order. Once a rule matches a Pod failure, the remaining of the rules are ignored. When no rule matches the Pod failure, the default handling applies - the counter of pod failures is incremented and it is checked against the backoffLimit. At most 20 elements are allowed. rules[] object PodFailurePolicyRule describes how a pod failure is handled when the requirements are met. One of onExitCodes and onPodConditions, but not both, can be used in each rule. 13.1.3. .spec.podFailurePolicy.rules Description A list of pod failure policy rules. The rules are evaluated in order. Once a rule matches a Pod failure, the remaining of the rules are ignored. When no rule matches the Pod failure, the default handling applies - the counter of pod failures is incremented and it is checked against the backoffLimit. At most 20 elements are allowed. Type array 13.1.4. .spec.podFailurePolicy.rules[] Description PodFailurePolicyRule describes how a pod failure is handled when the requirements are met. One of onExitCodes and onPodConditions, but not both, can be used in each rule. Type object Required action Property Type Description action string Specifies the action taken on a pod failure when the requirements are satisfied. Possible values are: - FailJob: indicates that the pod's job is marked as Failed and all running pods are terminated. - FailIndex: indicates that the pod's index is marked as Failed and will not be restarted. This value is beta-level. It can be used when the JobBackoffLimitPerIndex feature gate is enabled (enabled by default). - Ignore: indicates that the counter towards the .backoffLimit is not incremented and a replacement pod is created. - Count: indicates that the pod is handled in the default way - the counter towards the .backoffLimit is incremented. Additional values are considered to be added in the future. Clients should react to an unknown action by skipping the rule. 
Possible enum values: - "Count" This is an action which might be taken on a pod failure - the pod failure is handled in the default way - the counter towards .backoffLimit, represented by the job's .status.failed field, is incremented. - "FailIndex" This is an action which might be taken on a pod failure - mark the Job's index as failed to avoid restarts within this index. This action can only be used when backoffLimitPerIndex is set. This value is beta-level. - "FailJob" This is an action which might be taken on a pod failure - mark the pod's job as Failed and terminate all running pods. - "Ignore" This is an action which might be taken on a pod failure - the counter towards .backoffLimit, represented by the job's .status.failed field, is not incremented and a replacement pod is created. onExitCodes object PodFailurePolicyOnExitCodesRequirement describes the requirement for handling a failed pod based on its container exit codes. In particular, it lookups the .state.terminated.exitCode for each app container and init container status, represented by the .status.containerStatuses and .status.initContainerStatuses fields in the Pod status, respectively. Containers completed with success (exit code 0) are excluded from the requirement check. onPodConditions array Represents the requirement on the pod conditions. The requirement is represented as a list of pod condition patterns. The requirement is satisfied if at least one pattern matches an actual pod condition. At most 20 elements are allowed. onPodConditions[] object PodFailurePolicyOnPodConditionsPattern describes a pattern for matching an actual pod condition type. 13.1.5. .spec.podFailurePolicy.rules[].onExitCodes Description PodFailurePolicyOnExitCodesRequirement describes the requirement for handling a failed pod based on its container exit codes. In particular, it lookups the .state.terminated.exitCode for each app container and init container status, represented by the .status.containerStatuses and .status.initContainerStatuses fields in the Pod status, respectively. Containers completed with success (exit code 0) are excluded from the requirement check. Type object Required operator values Property Type Description containerName string Restricts the check for exit codes to the container with the specified name. When null, the rule applies to all containers. When specified, it should match one the container or initContainer names in the pod template. operator string Represents the relationship between the container exit code(s) and the specified values. Containers completed with success (exit code 0) are excluded from the requirement check. Possible values are: - In: the requirement is satisfied if at least one container exit code (might be multiple if there are multiple containers not restricted by the 'containerName' field) is in the set of specified values. - NotIn: the requirement is satisfied if at least one container exit code (might be multiple if there are multiple containers not restricted by the 'containerName' field) is not in the set of specified values. Additional values are considered to be added in the future. Clients should react to an unknown operator by assuming the requirement is not satisfied. Possible enum values: - "In" - "NotIn" values array (integer) Specifies the set of values. Each returned container exit code (might be multiple in case of multiple containers) is checked against this set of values with respect to the operator. The list of values must be ordered and must not contain duplicates. 
Value '0' cannot be used for the In operator. At least one element is required. At most 255 elements are allowed. 13.1.6. .spec.podFailurePolicy.rules[].onPodConditions Description Represents the requirement on the pod conditions. The requirement is represented as a list of pod condition patterns. The requirement is satisfied if at least one pattern matches an actual pod condition. At most 20 elements are allowed. Type array 13.1.7. .spec.podFailurePolicy.rules[].onPodConditions[] Description PodFailurePolicyOnPodConditionsPattern describes a pattern for matching an actual pod condition type. Type object Required type status Property Type Description status string Specifies the required Pod condition status. To match a pod condition it is required that the specified status equals the pod condition status. Defaults to True. type string Specifies the required Pod condition type. To match a pod condition it is required that specified type equals the pod condition type. 13.1.8. .status Description JobStatus represents the current state of a Job. Type object Property Type Description active integer The number of pending and running pods. completedIndexes string completedIndexes holds the completed indexes when .spec.completionMode = "Indexed" in a text format. The indexes are represented as decimal integers separated by commas. The numbers are listed in increasing order. Three or more consecutive numbers are compressed and represented by the first and last element of the series, separated by a hyphen. For example, if the completed indexes are 1, 3, 4, 5 and 7, they are represented as "1,3-5,7". completionTime Time Represents time when the job was completed. It is not guaranteed to be set in happens-before order across separate operations. It is represented in RFC3339 form and is in UTC. The completion time is only set when the job finishes successfully. conditions array The latest available observations of an object's current state. When a Job fails, one of the conditions will have type "Failed" and status true. When a Job is suspended, one of the conditions will have type "Suspended" and status true; when the Job is resumed, the status of this condition will become false. When a Job is completed, one of the conditions will have type "Complete" and status true. More info: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/ conditions[] object JobCondition describes current state of a job. failed integer The number of pods which reached phase Failed. failedIndexes string FailedIndexes holds the failed indexes when backoffLimitPerIndex=true. The indexes are represented in the text format analogous as for the completedIndexes field, ie. they are kept as decimal integers separated by commas. The numbers are listed in increasing order. Three or more consecutive numbers are compressed and represented by the first and last element of the series, separated by a hyphen. For example, if the failed indexes are 1, 3, 4, 5 and 7, they are represented as "1,3-5,7". This field is beta-level. It can be used when the JobBackoffLimitPerIndex feature gate is enabled (enabled by default). ready integer The number of pods which have a Ready condition. startTime Time Represents time when the job controller started processing a job. When a Job is created in the suspended state, this field is not set until the first time it is resumed. This field is reset every time a Job is resumed from suspension. It is represented in RFC3339 form and is in UTC. 
succeeded integer The number of pods which reached phase Succeeded. terminating integer The number of pods which are terminating (in phase Pending or Running and have a deletionTimestamp). This field is beta-level. The job controller populates the field when the feature gate JobPodReplacementPolicy is enabled (enabled by default). uncountedTerminatedPods object UncountedTerminatedPods holds UIDs of Pods that have terminated but haven't been accounted in Job status counters. 13.1.9. .status.conditions Description The latest available observations of an object's current state. When a Job fails, one of the conditions will have type "Failed" and status true. When a Job is suspended, one of the conditions will have type "Suspended" and status true; when the Job is resumed, the status of this condition will become false. When a Job is completed, one of the conditions will have type "Complete" and status true. More info: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/ Type array 13.1.10. .status.conditions[] Description JobCondition describes current state of a job. Type object Required type status Property Type Description lastProbeTime Time Last time the condition was checked. lastTransitionTime Time Last time the condition transit from one status to another. message string Human readable message indicating details about last transition. reason string (brief) reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of job condition, Complete or Failed. 13.1.11. .status.uncountedTerminatedPods Description UncountedTerminatedPods holds UIDs of Pods that have terminated but haven't been accounted in Job status counters. Type object Property Type Description failed array (string) failed holds UIDs of failed Pods. succeeded array (string) succeeded holds UIDs of succeeded Pods. 13.2. API endpoints The following API endpoints are available: /apis/batch/v1/jobs GET : list or watch objects of kind Job /apis/batch/v1/watch/jobs GET : watch individual changes to a list of Job. deprecated: use the 'watch' parameter with a list operation instead. /apis/batch/v1/namespaces/{namespace}/jobs DELETE : delete collection of Job GET : list or watch objects of kind Job POST : create a Job /apis/batch/v1/watch/namespaces/{namespace}/jobs GET : watch individual changes to a list of Job. deprecated: use the 'watch' parameter with a list operation instead. /apis/batch/v1/namespaces/{namespace}/jobs/{name} DELETE : delete a Job GET : read the specified Job PATCH : partially update the specified Job PUT : replace the specified Job /apis/batch/v1/watch/namespaces/{namespace}/jobs/{name} GET : watch changes to an object of kind Job. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/batch/v1/namespaces/{namespace}/jobs/{name}/status GET : read status of the specified Job PATCH : partially update status of the specified Job PUT : replace status of the specified Job 13.2.1. /apis/batch/v1/jobs HTTP method GET Description list or watch objects of kind Job Table 13.1. HTTP responses HTTP code Reponse body 200 - OK JobList schema 401 - Unauthorized Empty 13.2.2. /apis/batch/v1/watch/jobs HTTP method GET Description watch individual changes to a list of Job. deprecated: use the 'watch' parameter with a list operation instead. Table 13.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 13.2.3. 
/apis/batch/v1/namespaces/{namespace}/jobs HTTP method DELETE Description delete collection of Job Table 13.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 13.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Job Table 13.5. HTTP responses HTTP code Reponse body 200 - OK JobList schema 401 - Unauthorized Empty HTTP method POST Description create a Job Table 13.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.7. Body parameters Parameter Type Description body Job schema Table 13.8. HTTP responses HTTP code Reponse body 200 - OK Job schema 201 - Created Job schema 202 - Accepted Job schema 401 - Unauthorized Empty 13.2.4. /apis/batch/v1/watch/namespaces/{namespace}/jobs HTTP method GET Description watch individual changes to a list of Job. deprecated: use the 'watch' parameter with a list operation instead. Table 13.9. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 13.2.5. /apis/batch/v1/namespaces/{namespace}/jobs/{name} Table 13.10. Global path parameters Parameter Type Description name string name of the Job HTTP method DELETE Description delete a Job Table 13.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 13.12. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Job Table 13.13. HTTP responses HTTP code Reponse body 200 - OK Job schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Job Table 13.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.15. HTTP responses HTTP code Reponse body 200 - OK Job schema 201 - Created Job schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Job Table 13.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.17. Body parameters Parameter Type Description body Job schema Table 13.18. HTTP responses HTTP code Reponse body 200 - OK Job schema 201 - Created Job schema 401 - Unauthorized Empty 13.2.6. /apis/batch/v1/watch/namespaces/{namespace}/jobs/{name} Table 13.19. Global path parameters Parameter Type Description name string name of the Job HTTP method GET Description watch changes to an object of kind Job. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 13.20. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 13.2.7. /apis/batch/v1/namespaces/{namespace}/jobs/{name}/status Table 13.21. Global path parameters Parameter Type Description name string name of the Job HTTP method GET Description read status of the specified Job Table 13.22. 
HTTP responses HTTP code Reponse body 200 - OK Job schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Job Table 13.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.24. HTTP responses HTTP code Reponse body 200 - OK Job schema 201 - Created Job schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Job Table 13.25. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.26. Body parameters Parameter Type Description body Job schema Table 13.27. HTTP responses HTTP code Reponse body 200 - OK Job schema 201 - Created Job schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/workloads_apis/job-batch-v1 |
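To tie the Job schema in this chapter together, the following is a minimal sketch of a manifest that combines spec.podFailurePolicy with spec.backoffLimit. The job name, container name, image, and exit code 42 are illustrative placeholders rather than values taken from this reference.
apiVersion: batch/v1
kind: Job
metadata:
  name: my-job                      # placeholder name
spec:
  backoffLimit: 6
  podFailurePolicy:
    rules:
    - action: FailJob               # fail the whole Job if the main container exits with code 42
      onExitCodes:
        containerName: main
        operator: In
        values: [42]
    - action: Ignore                # do not count pod failures caused by disruption, for example a node drain
      onPodConditions:
      - type: DisruptionTarget
        status: "True"
  template:
    spec:
      restartPolicy: Never          # podFailurePolicy requires restartPolicy: Never
      containers:
      - name: main
        image: registry.example.com/my-app:latest   # placeholder image
        command: ["sh", "-c", "exit 0"]
Such a manifest can be created with a POST to /apis/batch/v1/namespaces/{namespace}/jobs as listed in the endpoints above, or with oc create -f <file>.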
Chapter 4. Authentication with Microsoft Azure | Chapter 4. Authentication with Microsoft Azure To authenticate users with Microsoft Azure: Enable authentication with Microsoft Azure . Provision users from Microsoft Azure to the software catalog . 4.1. Enabling authentication with Microsoft Azure Red Hat Developer Hub includes a Microsoft Azure authentication provider that can authenticate users by using OAuth. Prerequisites You have the permission to register an application in Microsoft Azure. You added a custom Developer Hub application configuration , and have sufficient permissions to modify it. Procedure To allow Developer Hub to authenticate with Microsoft Azure, create an OAuth application in Microsoft Azure . In the Azure portal, go to App registrations and create a New registration with the configuration: Name The application name in Azure, such as <My Developer Hub> . On the Home > App registrations > <My Developer Hub> > Manage > Authentication page, Add a platform , with the following configuration: Redirect URI Enter the backend authentication URI set in Developer Hub: https:// <my_developer_hub_url> /api/auth/microsoft/handler/frame Front-channel logout URL Leave blank. Implicit grant and hybrid flows Leave all checkboxes cleared. On the Home > App registrations > <My Developer Hub> > Manage > API permissions page, Add a Permission , then add the following Delegated permission for the Microsoft Graph API : email offline_access openid profile User.Read Optional custom scopes for the Microsoft Graph API that you define both in this section and in the Developer Hub configuration ( app-config-rhdh.yaml ). Note Your company might require you to grant admin consent for these permissions. Even if your company does not require admin consent, you might do so as it means users do not need to individually consent the first time they access Backstage. To grant administrator consent, a directory administrator must go to the admin consent page and click Grant admin consent for COMPANY NAME . On the Home > App registrations > <My Developer Hub> > Manage > Certificates & Secrets page, in the Client secrets tab, create a New client secret . Save the following values for the next step: Directory (tenant) ID Application (client) ID Application (client) secret To add your Microsoft Azure credentials to Developer Hub, add the following key/value pairs to your Developer Hub secrets : AUTH_AZURE_TENANT_ID Enter your saved Directory (tenant) ID . AUTH_AZURE_CLIENT_ID Enter your saved Application (client) ID . AUTH_AZURE_CLIENT_SECRET Enter your saved Application (client) secret . Set up the Microsoft Azure authentication provider in your Developer Hub custom configuration, such as app-config-rhdh : app-config-rhdh.yaml fragment auth: environment: production providers: microsoft: production: clientId: USD{AUTH_AZURE_CLIENT_ID} clientSecret: USD{AUTH_AZURE_CLIENT_SECRET} tenantId: USD{AUTH_AZURE_TENANT_ID} signInPage: microsoft environment: production Mark the environment as production to hide the Guest login in the Developer Hub home page. clientId , clientSecret and tenantId Use the Developer Hub application information that you have created in Microsoft Azure and configured in OpenShift as secrets. signInPage: microsoft Enable the Microsoft Azure provider as the default sign-in provider. Optional: Consider adding the following optional fields: dangerouslyAllowSignInWithoutUserInCatalog: true To enable authentication without requiring users to be provisioned in the Developer Hub software catalog.
Warning Use dangerouslyAllowSignInWithoutUserInCatalog to explore Developer Hub features, but do not use it in production. app-config-rhdh.yaml fragment with optional field to allow authenticating users absent from the software catalog auth: environment: production providers: microsoft: production: clientId: USD{AUTH_AZURE_CLIENT_ID} clientSecret: USD{AUTH_AZURE_CLIENT_SECRET} tenantId: USD{AUTH_AZURE_TENANT_ID} signInPage: microsoft dangerouslyAllowSignInWithoutUserInCatalog: true domainHint Optional for single-tenant applications. You can reduce login friction for users with accounts in multiple tenants by automatically filtering out accounts from other tenants. If you want to use this parameter for a single-tenant application, uncomment and enter the tenant ID. If your application registration is multi-tenant, leave this parameter blank. For more information, see Home Realm Discovery . app-config-rhdh.yaml fragment with optional domainHint field auth: environment: production providers: microsoft: production: domainHint: USD{AUTH_AZURE_TENANT_ID} additionalScopes Optional for additional scopes. To add scopes for the application registration, uncomment and enter the list of scopes that you want to add. The default and mandatory values are: 'openid', 'offline_access', 'profile', 'email', 'User.Read' . app-config-rhdh.yaml fragment with optional additionalScopes field auth: environment: production providers: microsoft: production: additionalScopes: - Mail.Send Note This step is only needed for environments with outgoing access restrictions, such as firewall rules. If your environment has such restrictions, ensure that your RHDH backend can access the following hosts: login.microsoftonline.com : For obtaining and exchanging authorization codes and access tokens. graph.microsoft.com : For retrieving user profile information (as referenced in the source code). If this host is unreachable, you might see an Authentication failed, failed to fetch user profile error when attempting to log in. 4.2. Provisioning users from Microsoft Azure to the software catalog To authenticate users with Microsoft Azure, after Enabling authentication with Microsoft Azure , provision users from Microsoft Azure to the Developer Hub software catalog. Prerequisites You have enabled authentication with Microsoft Azure . Procedure To enable Microsoft Azure member discovery, edit your custom Developer Hub ConfigMap, such as app-config-rhdh , and add the following lines to the app-config.yaml content: app-config.yaml fragment with mandatory microsoftGraphOrg fields dangerouslyAllowSignInWithoutUserInCatalog: false catalog: providers: microsoftGraphOrg: providerId: target: https://graph.microsoft.com/v1.0 tenantId: USD{AUTH_AZURE_TENANT_ID} clientId: USD{AUTH_AZURE_CLIENT_ID} clientSecret: USD{AUTH_AZURE_CLIENT_SECRET} dangerouslyAllowSignInWithoutUserInCatalog: false Allow authentication only for users in the Developer Hub software catalog. target: https://graph.microsoft.com/v1.0 Defines the MSGraph API endpoint the provider is connecting to. You might change this parameter to use a different version, such as the beta endpoint . tenantId , clientId and clientSecret Use the Developer Hub application information you created in Microsoft Azure and configured in OpenShift as secrets. Optional: Consider adding the following optional microsoftGraphOrg.providerId fields: authority: https://login.microsoftonline.com Defines the authority used. Change the value to use a different authority , such as Azure US government.
Default value: https://login.microsoftonline.com . app-config.yaml fragment with optional authority field catalog: providers: microsoftGraphOrg: providerId: authority: https://login.microsoftonline.com/ queryMode: basic | advanced By default, the Microsoft Graph API only provides the basic feature set for querying. Certain features require advanced querying capabilities. See Microsoft Azure Advanced queries . app-config.yaml fragment with optional queryMode field catalog: providers: microsoftGraphOrg: providerId: queryMode: advanced user.expand To include the expanded resource or collection referenced by a single relationship (navigation property) in your results. Only one relationship can be expanded in a single request. See Microsoft Graph query expand parameter . This parameter can be combined with userGroupMember.filter or user.filter . app-config.yaml fragment with optional user.expand field catalog: providers: microsoftGraphOrg: providerId: user: expand: manager user.filter To filter users. See Microsoft Graph API and Microsoft Graph API query filter parameters syntax . This parameter and userGroupMember.filter are mutually exclusive, only one can be specified. app-config.yaml fragment with optional user.filter field catalog: providers: microsoftGraphOrg: providerId: user: filter: accountEnabled eq true and userType eq 'member' user.loadPhotos: true | false Load photos by default. Set to false to not load user photos. app-config.yaml fragment with optional user.loadPhotos field catalog: providers: microsoftGraphOrg: providerId: user: loadPhotos: true user.select Define the Microsoft Graph resource types to retrieve. app-config.yaml fragment with optional user.select field catalog: providers: microsoftGraphOrg: providerId: user: select: ['id', 'displayName', 'description'] userGroupMember.filter To use group membership to get users. To filter groups and fetch their members. This parameter and user.filter are mutually exclusive, only one can be specified. app-config.yaml fragment with optional userGroupMember.filter field catalog: providers: microsoftGraphOrg: providerId: userGroupMember: filter: "displayName eq 'Backstage Users'" userGroupMember.search To use group membership to get users. To search for groups and fetch their members. This parameter and user.filter are mutually exclusive, only one can be specified. app-config.yaml fragment with optional userGroupMember.search field catalog: providers: microsoftGraphOrg: providerId: userGroupMember: search: '"description:One" AND ("displayName:Video" OR "displayName:Drive")' group.expand Optional parameter to include the expanded resource or collection referenced by a single relationship (navigation property) in your results. Only one relationship can be expanded in a single request. See https://docs.microsoft.com/en-us/graph/query-parameters#expand-parameter This parameter can be combined with userGroupMember.filter instead of user.filter . app-config.yaml fragment with optional group.expand field catalog: providers: microsoftGraphOrg: providerId: group: expand: member group.filter To filter groups. See Microsoft Graph API query group syntax . app-config.yaml fragment with optional group.filter field catalog: providers: microsoftGraphOrg: providerId: group: filter: securityEnabled eq false and mailEnabled eq true and groupTypes/any(c:c+eq+'Unified') group.search To search for groups. See Microsoft Graph API query search parameter .
app-config.yaml fragment with optional group.search field catalog: providers: microsoftGraphOrg: providerId: group: search: '"description:One" AND ("displayName:Video" OR "displayName:Drive")' group.select To define the Microsoft Graph resource types to retrieve. app-config.yaml fragment with optional group.select field catalog: providers: microsoftGraphOrg: providerId: group: select: ['id', 'displayName', 'description'] schedule.frequency To specify custom schedule frequency. Supports cron, ISO duration, and "human duration" as used in code. app-config.yaml fragment with optional schedule.frequency field catalog: providers: microsoftGraphOrg: providerId: schedule: frequency: { hours: 1 } schedule.timeout To specify custom timeout. Supports ISO duration and "human duration" as used in code. app-config.yaml fragment with optional schedule.timeout field catalog: providers: microsoftGraphOrg: providerId: schedule: timeout: { minutes: 50 } schedule.initialDelay To specify custom initial delay. Supports ISO duration and "human duration" as used in code. app-config.yaml fragment with optional schedule.initialDelay field catalog: providers: microsoftGraphOrg: providerId: schedule: initialDelay: { seconds: 15} Verification Check the console logs to verify that the synchronization is completed. Successful synchronization example: backend:start: {"class":"MicrosoftGraphOrgEntityProviderUSD1","level":"info","message":"Read 1 msgraph users and 1 msgraph groups in 2.2 seconds. Committing...","plugin":"catalog","service":"backstage","taskId":"MicrosoftGraphOrgEntityProvider:default:refresh","taskInstanceId":"88a67ce1-c466-41a4-9760-825e16b946be","timestamp":"2024-06-26 12:23:42"} backend:start: {"class":"MicrosoftGraphOrgEntityProviderUSD1","level":"info","message":"Committed 1 msgraph users and 1 msgraph groups in 0.0 seconds.","plugin":"catalog","service":"backstage","taskId":"MicrosoftGraphOrgEntityProvider:default:refresh","taskInstanceId":"88a67ce1-c466-41a4-9760-825e16b946be","timestamp":"2024-06-26 12:23:42"} Log in with a Microsoft Azure account. | [
"auth: environment: production providers: microsoft: production: clientId: USD{AUTH_AZURE_CLIENT_ID} clientSecret: USD{AUTH_AZURE_CLIENT_SECRET} tenantId: USD{AUTH_AZURE_TENANT_ID} signInPage: microsoft",
"auth: environment: production providers: microsoft: production: clientId: USD{AUTH_AZURE_CLIENT_ID} clientSecret: USD{AUTH_AZURE_CLIENT_SECRET} tenantId: USD{AUTH_AZURE_TENANT_ID} signInPage: microsoft dangerouslyAllowSignInWithoutUserInCatalog: true",
"auth: environment: production providers: microsoft: production: domainHint: USD{AUTH_AZURE_TENANT_ID}",
"auth: environment: production providers: microsoft: production: additionalScopes: - Mail.Send",
"dangerouslyAllowSignInWithoutUserInCatalog: false catalog: providers: microsoftGraphOrg: providerId: target: https://graph.microsoft.com/v1.0 tenantId: USD{AUTH_AZURE_TENANT_ID} clientId: USD{AUTH_AZURE_CLIENT_ID} clientSecret: USD{AUTH_AZURE_CLIENT_SECRET}",
"catalog: providers: microsoftGraphOrg: providerId: authority: https://login.microsoftonline.com/",
"catalog: providers: microsoftGraphOrg: providerId: queryMode: advanced",
"catalog: providers: microsoftGraphOrg: providerId: user: expand: manager",
"catalog: providers: microsoftGraphOrg: providerId: user: filter: accountEnabled eq true and userType eq 'member'",
"catalog: providers: microsoftGraphOrg: providerId: user: loadPhotos: true",
"catalog: providers: microsoftGraphOrg: providerId: user: select: ['id', 'displayName', 'description']",
"catalog: providers: microsoftGraphOrg: providerId: userGroupMember: filter: \"displayName eq 'Backstage Users'\"",
"catalog: providers: microsoftGraphOrg: providerId: userGroupMember: search: '\"description:One\" AND (\"displayName:Video\" OR \"displayName:Drive\")'",
"catalog: providers: microsoftGraphOrg: providerId: group: expand: member",
"catalog: providers: microsoftGraphOrg: providerId: group: filter: securityEnabled eq false and mailEnabled eq true and groupTypes/any(c:c+eq+'Unified')",
"catalog: providers: microsoftGraphOrg: providerId: group: search: '\"description:One\" AND (\"displayName:Video\" OR \"displayName:Drive\")'",
"catalog: providers: microsoftGraphOrg: providerId: group: select: ['id', 'displayName', 'description']",
"catalog: providers: microsoftGraphOrg: providerId: schedule: frequency: { hours: 1 }",
"catalog: providers: microsoftGraphOrg: providerId: schedule: timeout: { minutes: 50 }",
"catalog: providers: microsoftGraphOrg: providerId: schedule: initialDelay: { seconds: 15}",
"backend:start: {\"class\":\"MicrosoftGraphOrgEntityProviderUSD1\",\"level\":\"info\",\"message\":\"Read 1 msgraph users and 1 msgraph groups in 2.2 seconds. Committing...\",\"plugin\":\"catalog\",\"service\":\"backstage\",\"taskId\":\"MicrosoftGraphOrgEntityProvider:default:refresh\",\"taskInstanceId\":\"88a67ce1-c466-41a4-9760-825e16b946be\",\"timestamp\":\"2024-06-26 12:23:42\"} backend:start: {\"class\":\"MicrosoftGraphOrgEntityProviderUSD1\",\"level\":\"info\",\"message\":\"Committed 1 msgraph users and 1 msgraph groups in 0.0 seconds.\",\"plugin\":\"catalog\",\"service\":\"backstage\",\"taskId\":\"MicrosoftGraphOrgEntityProvider:default:refresh\",\"taskInstanceId\":\"88a67ce1-c466-41a4-9760-825e16b946be\",\"timestamp\":\"2024-06-26 12:23:42\"}"
]
| https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/authentication/assembly-authenticating-with-microsoft-azure |
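The chapter above reads the AUTH_AZURE_TENANT_ID , AUTH_AZURE_CLIENT_ID , and AUTH_AZURE_CLIENT_SECRET values from your Developer Hub secrets but does not show the secret object itself. As a minimal sketch, assuming the secret is named secrets-rhdh in a my-rhdh namespace (both names are assumptions; use the secret already referenced by your Backstage custom resource or Helm values), it could look like:
apiVersion: v1
kind: Secret
metadata:
  name: secrets-rhdh               # assumed name
  namespace: my-rhdh               # assumed namespace
type: Opaque
stringData:
  AUTH_AZURE_TENANT_ID: <directory_tenant_id>
  AUTH_AZURE_CLIENT_ID: <application_client_id>
  AUTH_AZURE_CLIENT_SECRET: <application_client_secret>
Apply it with oc apply -f <file> ; depending on how the secret is mounted, a restart of the Developer Hub pods may be needed before the new values take effect.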
Operator APIs | Operator APIs OpenShift Container Platform 4.12 Reference guide for Operator APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/operator_apis/index |
Chapter 22. Using Cruise Control to modify topic replication factor | Chapter 22. Using Cruise Control to modify topic replication factor Change the replication factor of topics by updating the KafkaTopic resource managed by the Topic Operator. You can adjust the replication factor for specific purposes, such as: Setting a lower replication factor for non-critical topics or because of resource shortages Setting a higher replication factor to improve data durability and fault tolerance The Topic Operator uses Cruise Control to make the necessary changes, so Cruise Control must be deployed with Streams for Apache Kafka. The Topic Operator watches and periodically reconciles all managed and unpaused KafkaTopic resources to detect changes to .spec.replicas configuration by comparing the replication factor of the topic in Kafka. One or more replication factor updates are then sent to Cruise Control for processing in a single request. Progress is reflected in the status of the KafkaTopic resource. Prerequisites The Cluster Operator must be deployed. The Topic Operator must be deployed to manage topics through the KafkaTopic custom resource. Cruise Control is deployed with Kafka. Procedure Edit the KafkaTopic resource to change the replicas value. In this procedure, we change the replicas value for my-topic from 1 to 3. Kafka topic replication factor configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 1 replicas: 3 # ... Apply the change to the KafkaTopic configuration and wait for the Topic Operator to update the topic. Check the status of the KafkaTopic resource to make sure the request was successful: oc get kafkatopics my-topic -o yaml Status for the replication factor change apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 1 replicas: 3 # ... # ... status: conditions: - lastTransitionTime: "2024-01-18T16:13:50.490918232Z" status: "True" type: Ready observedGeneration: 2 replicasChange: sessionId: 1aa418ca-53ed-4b93-b0a4-58413c4fc0cb 1 state: ongoing 2 targetReplicas: 3 3 topicName: my-topic 1 The session ID for the Cruise Control operation, which is shown when process moves out of a pending state. 2 The state of the update. Moves from pending to ongoing , and then the entire replicasChange status is removed when the change is complete. 3 The requested change to the number of replicas. An error message is shown in the status if the request fails before completion. The request is periodically retried if it enters a failed state. Changing topic replication factor using the standalone Topic Operator If you are using the standalone Topic Operator and aim to change the topic replication factor through configuration, you still need to use the Topic Operator in unidirectional mode alongside a Cruise Control deployment. You also need to include the following environment variables in the standalone Topic Operator deployment so that it can integrate with Cruise Control. Example standalone Topic Operator deployment configuration apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-topic-operator labels: app: strimzi spec: # ... template: # ... spec: # ... containers: - name: strimzi-topic-operator # ... env: # ... 
- name: STRIMZI_CRUISE_CONTROL_ENABLED 1 value: true - name: STRIMZI_CRUISE_CONTROL_RACK_ENABLED 2 value: false - name: STRIMZI_CRUISE_CONTROL_HOSTNAME 3 value: cruise-control-api.namespace.svc - name: STRIMZI_CRUISE_CONTROL_PORT 4 value: 9090 - name: STRIMZI_CRUISE_CONTROL_SSL_ENABLED 5 value: true - name: STRIMZI_CRUISE_CONTROL_AUTH_ENABLED 6 value: true 1 Integrates Cruise Control with the Topic Operator. 2 Flag to indicate whether rack awareness is enabled on the Kafka cluster. If so, replicas can be spread across different racks, data centers, or availability zones. 3 Cruise Control hostname. 4 Cruise control port. 5 Enables TLS authentication and encryption for accessing the Kafka cluster. 6 Enables basic authorization for accessing the Cruise Control API. If you enable TLS authentication and authorization, mount the required certificates as follows: Public certificates of the Cluster CA (certificate authority) in /etc/tls-sidecar/cluster-ca-certs/ca.crt Basic authorization credentials (user name and password) in /etc/eto-cc-api/topic-operator.apiAdminName and /etc/eto-cc-api/topic-operator.apiAdminPassword | [
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 1 replicas: 3 #",
"get kafkatopics my-topic -o yaml",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 1 replicas: 3 # status: conditions: - lastTransitionTime: \"2024-01-18T16:13:50.490918232Z\" status: \"True\" type: Ready observedGeneration: 2 replicasChange: sessionId: 1aa418ca-53ed-4b93-b0a4-58413c4fc0cb 1 state: ongoing 2 targetReplicas: 3 3 topicName: my-topic",
"apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-topic-operator labels: app: strimzi spec: # template: # spec: # containers: - name: strimzi-topic-operator # env: # - name: STRIMZI_CRUISE_CONTROL_ENABLED 1 value: true - name: STRIMZI_CRUISE_CONTROL_RACK_ENABLED 2 value: false - name: STRIMZI_CRUISE_CONTROL_HOSTNAME 3 value: cruise-control-api.namespace.svc - name: STRIMZI_CRUISE_CONTROL_PORT 4 value: 9090 - name: STRIMZI_CRUISE_CONTROL_SSL_ENABLED 5 value: true - name: STRIMZI_CRUISE_CONTROL_AUTH_ENABLED 6 value: true"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/deploying_and_managing_streams_for_apache_kafka_on_openshift/proc-cruise-control-topic-replication-str |
Chapter 2. Adding trusted certificate authorities | Chapter 2. Adding trusted certificate authorities Learn how to add custom trusted certificate authorities to Red Hat Advanced Cluster Security for Kubernetes. If you are using an enterprise certificate authority (CA) on your network, or self-signed certificates, you must add the CA's root certificate to Red Hat Advanced Cluster Security for Kubernetes as a trusted root CA. Adding trusted root CAs allows: Central and Scanner to trust remote servers when you integrate with other tools. Sensor to trust custom certificates you use for Central. You can add additional CAs during the installation or on an existing deployment. Note You must first configure your trusted CAs in the cluster where you have deployed Central and then propagate the changes to Scanner and Sensor. 2.1. Configuring additional CAs To add custom CAs: Procedure Download the ca-setup.sh script. Note If you are doing a new installation, you can find the ca-setup.sh script in the scripts directory at central-bundle/central/scripts/ca-setup.sh . You must run the ca-setup.sh script in the same terminal from which you logged into your OpenShift Container Platform cluster. Make the ca-setup.sh script executable: USD chmod +x ca-setup.sh To add: A single certificate, use the -f (file) option: USD ./ca-setup.sh -f <certificate> Note You must use a PEM-encoded certificate file (with any extension). You can also use the -u (update) option along with the -f option to update any previously added certificate. Multiple certificates at once, move all certificates in a directory, and then use the -d (directory) option: USD ./ca-setup.sh -d <directory_name> Note You must use PEM-encoded certificate files with a .crt or .pem extension. Each file must only contain a single certificate. You can also use the -u (update) option along with the -d option to update any previously added certificates. 2.2. Propagating changes After you configure trusted CAs, you must make Red Hat Advanced Cluster Security for Kubernetes services trust them. If you have configured trusted CAs after the installation, you must restart Central. Additionally, if you are also adding certificates for integrating with image registries, you must restart both Central and Scanner. 2.2.1. Restarting the Central container You can restart the Central container by killing the Central container or by deleting the Central pod. Procedure Run the following command to kill the Central container: Note You must wait for at least 1 minute, until OpenShift Container Platform propagates your changes and restarts the Central container. USD oc -n stackrox exec deploy/central -c central -- kill 1 Or, run the following command to delete the Central pod: USD oc -n stackrox delete pod -lapp=central 2.2.2. Restarting the Scanner container You can restart the Scanner container by deleting the pod. Procedure Run the following command to delete the Scanner pod: On OpenShift Container Platform: USD oc delete pod -n stackrox -l app=scanner On Kubernetes: USD kubectl delete pod -n stackrox -l app=scanner Important After you have added trusted CAs and configured Central, the CAs are included in any new Sensor deployment bundles that you create. If an existing Sensor reports problems while connecting to Central, you must generate a Sensor deployment YAML file and update existing clusters. 
If you are deploying a new Sensor using the sensor.sh script, run the following command before you run the sensor.sh script: USD ./ca-setup-sensor.sh -d ./additional-cas/ If you are deploying a new Sensor using Helm, you do not have to run any additional scripts. | [
"chmod +x ca-setup.sh",
"./ca-setup.sh -f <certificate>",
"./ca-setup.sh -d <directory_name>",
"oc -n stackrox exec deploy/central -c central -- kill 1",
"oc -n stackrox delete pod -lapp=central",
"oc delete pod -n stackrox -l app=scanner",
"kubectl delete pod -n stackrox -l app=scanner",
"./ca-setup-sensor.sh -d ./additional-cas/"
]
| https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/configuring/add-trusted-ca |
Preface | Preface Preface | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_the_streams_for_apache_kafka_console/preface |
Chapter 7. Data Roles | Chapter 7. Data Roles 7.1. Data Roles Data roles, also called entitlements, are sets of permissions defined per VDB that dictate data access (create, read, update, delete). Data roles use a fine-grained permission system that JBoss Data Virtualization will enforce at runtime and provide audit log entries for access violations. Refer to the Administration and Configuration Guide and Development Guide: Server Development for more information about Logging and Custom Logging. Prior to applying data roles, you should consider restricting source system access through the fundamental design of your VDB. Foremost, JBoss Data Virtualization can only access source entries that are represented in imported metadata. You should narrow imported metadata to only what is necessary for use by your VDB. When using Teiid Designer, you may then go further and modify the imported metadata at a granular level to remove specific columns or indicate tables that are not to be updated, etc. If data role validation is enabled and data roles are defined in a VDB, then access permissions will be enforced by the JBoss Data Virtualization Server. The use of data roles may be disabled system wide using the setting for the teiid subsystem policy-decider-module. Data roles also have built-in system functions (see Section 2.4.18, "Security Functions" ) that can be used for row-based and other authorization checks. The hasRole system function will return true if the current user has the given data role. The hasRole function can be used in procedure or view definitions to allow for a more dynamic application of security - which allows for things such as value masking or row level security. Note See the Security Guide for details on using an alternative authorization scheme. Warning Data roles are only checked if present in a VDB. A VDB deployed without data roles can be used by any authenticated user. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/chap-data_roles |
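As a sketch of the dynamic, row-level authorization described above, a view definition can call the hasRole system function directly. The view name, source model, column names, and the finance role below are illustrative assumptions, not values taken from this chapter:
CREATE VIEW RestrictedOrders (
    id integer,
    customer string,
    amount bigdecimal
) AS
SELECT o.id, o.customer, o.amount
FROM Source.Orders AS o
WHERE hasRole('data', 'finance')
   OR o.amount < 10000;
Because hasRole is evaluated for the current session, users who hold the finance data role see every row, while all other users see only the low-value orders, without duplicating the view per role.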
39.4. Migrating over SSL | 39.4. Migrating over SSL To encrypt the data transmission between LDAP and IdM during a migration: Store the certificate of the CA that issued the remote LDAP server's certificate in a file on the IdM server. For example: /etc/ipa/remote.crt . Follow the steps described in Section 39.3, "Migrating an LDAP Server to Identity Management" . However, for an encrypted LDAP connection during the migration, use the ldaps protocol in the URL and pass the --ca-cert-file option to the command. For example: | [
"ipa migrate-ds --ca-cert-file= /etc/ipa/remote.crt ldaps:// ldap.example.com :636"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/migrationg-ldap-ssl |
Chapter 5. Removed features | Chapter 5. Removed features This section describes features and functionality that have been removed from AMQ Broker. Access to the root of the AMQ Broker web server In previous versions of AMQ Broker, opening the root URL of the AMQ Broker web server, for example, http://localhost:8161/ , in a browser window displayed a landing page. The landing page had links to AMQ Management Console and AMQ Broker documentation. In 7.11, all static HTML content is removed from AMQ Broker. Therefore, if you open the root URL of the AMQ Broker web server, a landing page is not displayed. Instead, your browser session is automatically redirected to AMQ Management Console. | null | https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.11/html/release_notes_for_red_hat_amq_broker_7.11/removed_features
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.2/html/introduction_to_the_migration_toolkit_for_applications/making-open-source-more-inclusive |
Chapter 1. Authorization APIs | Chapter 1. Authorization APIs 1.1. LocalResourceAccessReview [authorization.openshift.io/v1] Description LocalResourceAccessReview is a means to request a list of which users and groups are authorized to perform the action specified by spec in a particular namespace Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.2. LocalSubjectAccessReview [authorization.openshift.io/v1] Description LocalSubjectAccessReview is an object for requesting information about whether a user or group can perform an action in a particular namespace Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.3. ResourceAccessReview [authorization.openshift.io/v1] Description ResourceAccessReview is a means to request a list of which users and groups are authorized to perform the action specified by spec Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.4. SelfSubjectRulesReview [authorization.openshift.io/v1] Description SelfSubjectRulesReview is a resource you can create to determine which actions you can perform in a namespace Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.5. SubjectAccessReview [authorization.openshift.io/v1] Description SubjectAccessReview is an object for requesting information about whether a user or group can perform an action Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.6. SubjectRulesReview [authorization.openshift.io/v1] Description SubjectRulesReview is a resource you can create to determine which actions another user can perform in a namespace Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.7. SelfSubjectReview [authentication.k8s.io/v1] Description SelfSubjectReview contains the user information that the kube-apiserver has about the user making this request. When using impersonation, users will receive the user info of the user being impersonated. If impersonation or request header authentication is used, any extra keys will have their case ignored and returned as lowercase. Type object 1.8. TokenRequest [authentication.k8s.io/v1] Description TokenRequest requests a token for a given service account. Type object 1.9. TokenReview [authentication.k8s.io/v1] Description TokenReview attempts to authenticate a token to a known user. Note: TokenReview requests may be cached by the webhook token authenticator plugin in the kube-apiserver. Type object 1.10. LocalSubjectAccessReview [authorization.k8s.io/v1] Description LocalSubjectAccessReview checks whether or not a user or group can perform an action in a given namespace. Having a namespace scoped resource makes it much easier to grant namespace scoped policy that includes permissions checking. Type object 1.11. SelfSubjectAccessReview [authorization.k8s.io/v1] Description SelfSubjectAccessReview checks whether or the current user can perform an action. Not filling in a spec.namespace means "in all namespaces". Self is a special case, because users should always be able to check whether they can perform an action Type object 1.12. 
SelfSubjectRulesReview [authorization.k8s.io/v1] Description SelfSubjectRulesReview enumerates the set of actions the current user can perform within a namespace. The returned list of actions may be incomplete depending on the server's authorization mode, and any errors experienced during the evaluation. SelfSubjectRulesReview should be used by UIs to show/hide actions, or to quickly let an end user reason about their permissions. It should NOT Be used by external systems to drive authorization decisions as this raises confused deputy, cache lifetime/revocation, and correctness concerns. SubjectAccessReview, and LocalAccessReview are the correct way to defer authorization decisions to the API server. Type object 1.13. SubjectAccessReview [authorization.k8s.io/v1] Description SubjectAccessReview checks whether or not a user or group can perform an action. Type object | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/authorization_apis/authorization-apis |
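For a concrete sense of how these review objects are used, the following sketch asks whether the current user can create pods in a given namespace; the namespace name my-project is an illustrative assumption:
apiVersion: authorization.k8s.io/v1
kind: SelfSubjectAccessReview
spec:
  resourceAttributes:
    namespace: my-project          # illustrative namespace
    verb: create
    group: ""                      # core API group
    resource: pods
Creating the object (for example, oc create -f <file> -o yaml ) returns it with status.allowed populated; the command oc auth can-i create pods -n my-project performs the same check from the CLI.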
Quick Start Guide | Quick Start Guide Red Hat Gluster Storage 3.5 Getting Started with Web Administration Red Hat Gluster Storage Documentation Team Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/quick_start_guide/index |
5.4. Logical Volume Administration | 5.4. Logical Volume Administration This section describes the commands that perform the various aspects of logical volume administration. 5.4.1. Creating Linear Logical Volumes To create a logical volume, use the lvcreate command. If you do not specify a name for the logical volume, the default name lvol # is used where # is the internal number of the logical volume. When you create a logical volume, the logical volume is carved from a volume group using the free extents on the physical volumes that make up the volume group. Normally logical volumes use up any space available on the underlying physical volumes on a next-free basis. Modifying the logical volume frees and reallocates space in the physical volumes. As of the Red Hat Enterprise Linux 6.3 release, you can use LVM to create, display, rename, use, and remove RAID logical volumes. For information on RAID logical volumes, see Section 5.4.16, "RAID Logical Volumes" . The following command creates a logical volume 10 gigabytes in size in the volume group vg1 . The default unit for logical volume size is megabytes. The following command creates a 1500 MB linear logical volume named testlv in the volume group testvg , creating the block device /dev/testvg/testlv . The following command creates a 50 gigabyte logical volume named gfslv from the free extents in volume group vg0 . You can use the -l argument of the lvcreate command to specify the size of the logical volume in extents. You can also use this argument to specify the percentage of the volume group to use for the logical volume. The following command creates a logical volume called mylv that uses 60% of the total space in volume group testvg . You can also use the -l argument of the lvcreate command to specify the percentage of the remaining free space in a volume group as the size of the logical volume. The following command creates a logical volume called yourlv that uses all of the unallocated space in the volume group testvg . You can use the -l argument of the lvcreate command to create a logical volume that uses the entire volume group. Another way to create a logical volume that uses the entire volume group is to use the vgdisplay command to find the "Total PE" size and to use those results as input to the lvcreate command. The following commands create a logical volume called mylv that fills the volume group named testvg . The underlying physical volumes used to create a logical volume can be important if the physical volume needs to be removed, so you may need to consider this possibility when you create the logical volume. For information on removing a physical volume from a volume group, see Section 5.3.7, "Removing Physical Volumes from a Volume Group" . To create a logical volume to be allocated from a specific physical volume in the volume group, specify the physical volume or volumes at the end of the lvcreate command line. The following command creates a logical volume named testlv in volume group testvg allocated from the physical volume /dev/sdg1 . You can specify which extents of a physical volume are to be used for a logical volume. The following example creates a linear logical volume out of extents 0 through 24 of physical volume /dev/sda1 and extents 50 through 124 of physical volume /dev/sdb1 in volume group testvg . The following example creates a linear logical volume out of extents 0 through 25 of physical volume /dev/sda1 and then continues laying out the logical volume at extent 100.
The default policy for how the extents of a logical volume are allocated is inherit , which applies the same policy as for the volume group. These policies can be changed using the lvchange command. For information on allocation policies, see Section 5.3.1, "Creating Volume Groups" . | [
"lvcreate -L 10G vg1",
"lvcreate -L 1500 -n testlv testvg",
"lvcreate -L 50G -n gfslv vg0",
"lvcreate -l 60%VG -n mylv testvg",
"lvcreate -l 100%FREE -n yourlv testvg",
"vgdisplay testvg | grep \"Total PE\" Total PE 10230 lvcreate -l 10230 testvg -n mylv",
"lvcreate -L 1500 -ntestlv testvg /dev/sdg1",
"lvcreate -l 100 -n testlv testvg /dev/sda1:0-24 /dev/sdb1:50-124",
"lvcreate -l 100 -n testlv testvg /dev/sda1:0-25:100-"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/lv |
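As a brief illustration of the allocation policies mentioned at the end of this section, a policy can be set when the volume is created and changed later with the lvchange command. The volume group and logical volume names reuse the examples above, and contiguous is only one of the available policies; see the lvcreate and lvchange man pages for the full set.
lvcreate -L 1500 -n testlv --alloc anywhere testvg
lvchange --alloc contiguous testvg/testlv
Setting a policy explicitly on the logical volume overrides the inherit default, which would otherwise take the policy from the volume group.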
Chapter 5. Upgrading the Red Hat build of Keycloak Admin Client | Chapter 5. Upgrading the Red Hat build of Keycloak Admin Client Be sure that you upgrade the Red Hat build of Keycloak server before you upgrade the admin-client. Earlier versions of the admin-client might work with later versions of the Red Hat build of Keycloak server, but earlier versions of the Red Hat build of Keycloak server might not work with later versions of the admin-client. Therefore, use the admin-client version that matches the current Red Hat build of Keycloak server version. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/upgrading_guide/upgrading_the_red_hat_build_of_keycloak_admin_client
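Before upgrading, you can confirm which admin-client version your build currently resolves so that you can match it to the server version. The following is a hedged example for a Maven project; it assumes the upstream org.keycloak:keycloak-admin-client coordinates, which your project may replace with product-specific coordinates or manage through a BOM.

mvn dependency:tree -Dincludes=org.keycloak:keycloak-admin-client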
Chapter 8. ConsoleSample [console.openshift.io/v1] | Chapter 8. ConsoleSample [console.openshift.io/v1] Description ConsoleSample is an extension to customizing OpenShift web console by adding samples. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required metadata spec 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec contains configuration for a console sample. 8.1.1. .spec Description spec contains configuration for a console sample. Type object Required abstract description source title Property Type Description abstract string abstract is a short introduction to the sample. It is required and must be no more than 100 characters in length. The abstract is shown on the sample card tile below the title and provider and is limited to three lines of content. description string description is a long form explanation of the sample. It is required and can have a maximum length of 4096 characters. It is a README.md-like content for additional information, links, pre-conditions, and other instructions. It will be rendered as Markdown so that it can contain line breaks, links, and other simple formatting. icon string icon is an optional base64 encoded image and shown beside the sample title. The format must follow the data: URL format and can have a maximum size of 10 KB . data:[<mediatype>][;base64],<base64 encoded image> For example: data:image;base64, plus the base64 encoded image. Vector images can also be used. SVG icons must start with: data:image/svg+xml;base64, plus the base64 encoded SVG image. All sample catalog icons will be shown on a white background (also when the dark theme is used). The web console ensures that different aspect ratios work correctly. Currently, the surface of the icon is at most 40x100px. For more information on the data URL format, please visit https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/Data_URLs . provider string provider is an optional label to honor who provides the sample. It is optional and must be no more than 50 characters in length. A provider can be a company like "Red Hat" or an organization like "CNCF" or "Knative". Currently, the provider is only shown on the sample card tile below the title with the prefix "Provided by " source object source defines where to deploy the sample service from. The sample may be sourced from an external git repository or container image. tags array (string) tags are optional string values that can be used to find samples in the samples catalog. Examples of common tags may be "Java", "Quarkus", etc. They will be displayed on the samples details page. title string title is the display name of the sample. 
It is required and must be no more than 50 characters in length. type string type is an optional label to group multiple samples. It is optional and must be no more than 20 characters in length. Recommendation is a singular term like "Builder Image", "Devfile" or "Serverless Function". Currently, the type is shown a badge on the sample card tile in the top right corner. 8.1.2. .spec.source Description source defines where to deploy the sample service from. The sample may be sourced from an external git repository or container image. Type object Required type Property Type Description containerImport object containerImport allows the user import a container image. gitImport object gitImport allows the user to import code from a git repository. type string type of the sample, currently supported: "GitImport";"ContainerImport" 8.1.3. .spec.source.containerImport Description containerImport allows the user import a container image. Type object Required image Property Type Description image string reference to a container image that provides a HTTP service. The service must be exposed on the default port (8080) unless otherwise configured with the port field. Supported formats: - <repository-name>/<image-name> - docker.io/<repository-name>/<image-name> - quay.io/<repository-name>/<image-name> - quay.io/<repository-name>/<image-name>@sha256:<image hash> - quay.io/<repository-name>/<image-name>:<tag> service object service contains configuration for the Service resource created for this sample. 8.1.4. .spec.source.containerImport.service Description service contains configuration for the Service resource created for this sample. Type object Property Type Description targetPort integer targetPort is the port that the service listens on for HTTP requests. This port will be used for Service and Route created for this sample. Port must be in the range 1 to 65535. Default port is 8080. 8.1.5. .spec.source.gitImport Description gitImport allows the user to import code from a git repository. Type object Required repository Property Type Description repository object repository contains the reference to the actual Git repository. service object service contains configuration for the Service resource created for this sample. 8.1.6. .spec.source.gitImport.repository Description repository contains the reference to the actual Git repository. Type object Required url Property Type Description contextDir string contextDir is used to specify a directory within the repository to build the component. Must start with / and have a maximum length of 256 characters. When omitted, the default value is to build from the root of the repository. revision string revision is the git revision at which to clone the git repository Can be used to clone a specific branch, tag or commit SHA. Must be at most 256 characters in length. When omitted the repository's default branch is used. url string url of the Git repository that contains a HTTP service. The HTTP service must be exposed on the default port (8080) unless otherwise configured with the port field. Only public repositories on GitHub, GitLab and Bitbucket are currently supported: - https://github.com/<org>/<repository> - https://gitlab.com/<org>/<repository> - https://bitbucket.org/<org>/<repository> The url must have a maximum length of 256 characters. 8.1.7. .spec.source.gitImport.service Description service contains configuration for the Service resource created for this sample. 
Type object Property Type Description targetPort integer targetPort is the port that the service listens on for HTTP requests. This port will be used for Service created for this sample. Port must be in the range 1 to 65535. Default port is 8080. 8.2. API endpoints The following API endpoints are available: /apis/console.openshift.io/v1/consolesamples DELETE : delete collection of ConsoleSample GET : list objects of kind ConsoleSample POST : create a ConsoleSample /apis/console.openshift.io/v1/consolesamples/{name} DELETE : delete a ConsoleSample GET : read the specified ConsoleSample PATCH : partially update the specified ConsoleSample PUT : replace the specified ConsoleSample 8.2.1. /apis/console.openshift.io/v1/consolesamples HTTP method DELETE Description delete collection of ConsoleSample Table 8.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ConsoleSample Table 8.2. HTTP responses HTTP code Reponse body 200 - OK ConsoleSampleList schema 401 - Unauthorized Empty HTTP method POST Description create a ConsoleSample Table 8.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.4. Body parameters Parameter Type Description body ConsoleSample schema Table 8.5. HTTP responses HTTP code Reponse body 200 - OK ConsoleSample schema 201 - Created ConsoleSample schema 202 - Accepted ConsoleSample schema 401 - Unauthorized Empty 8.2.2. /apis/console.openshift.io/v1/consolesamples/{name} Table 8.6. Global path parameters Parameter Type Description name string name of the ConsoleSample HTTP method DELETE Description delete a ConsoleSample Table 8.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 8.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ConsoleSample Table 8.9. 
HTTP responses HTTP code Reponse body 200 - OK ConsoleSample schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ConsoleSample Table 8.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.11. HTTP responses HTTP code Reponse body 200 - OK ConsoleSample schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ConsoleSample Table 8.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.13. Body parameters Parameter Type Description body ConsoleSample schema Table 8.14. HTTP responses HTTP code Reponse body 200 - OK ConsoleSample schema 201 - Created ConsoleSample schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/console_apis/consolesample-console-openshift-io-v1 |
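To complement the ConsoleSample reference above, the cluster-scoped resource can be inspected with the OpenShift CLI. This is a minimal sketch, assuming you are logged in with oc and have permission to read console.openshift.io resources; replace <name> with an actual sample name.

oc get consolesamples
oc get consolesample <name> -o yaml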
Chapter 24. Enabling accessibility for visually impaired users | Chapter 24. Enabling accessibility for visually impaired users As a system administrator, you can configure the desktop environment to support users with a visual impairment. To enable accessibility, perform the following procedures. 24.1. Components that provide accessibility features On the RHEL 8 desktop, the Orca screen reader ensures accessibility for users with a visual impairment. Orca is included in the default RHEL installation. Orca reads information from the screen and communicates it to you using the following components: Speech Dispatcher Orca uses Speech Dispatcher to communicate with the speech synthesizer. Speech Dispatcher supports various speech synthesis backends, ensures that messages from other applications do not interrupt the messages from Orca, and provides other functionality. Speech synthesizer Provides a speech output. The default speech synthesizer is eSpeak-NG . Braille display Provides a tactile output. The BRLTTY service enables this functionality. Additional resources Orca help page 24.2. Enabling the Universal Access menu You can enable the Universal Access Menu icon in the top panel, which provides a menu with several accessibility options. Procedure Open the Settings application. Select Universal Access . Enable the Always Show Universal Access Menu item. Enabling the Universal Access menu in Settings Verification Check that the Universal Access Menu icon is displayed on the top bar even when all options from this menu are switched off. 24.3. Enabling the screen reader You can enable the Orca screen reader in your desktop environment. The screen reader then reads the text displayed on the screen to improve accessibility. Procedure Enable the screen reader in either of the following ways: Press the Super + Alt + S keyboard shortcut. If the top panel shows the Universal Access menu, select Screen Reader in the menu. Verification Open an application with text content. Check that the screen reader reads the text in the application. 24.4. Enabling a Braille display device The Braille display is a device that uses the brltty service to provide tactile output for visually impaired users. In order for the Braille display to work correctly, perform the following procedures. 24.4.1. Supported types of Braille display device The following types of Braille display devices are supported on RHEL 8. Table 24.1. Braille display device types and the corresponding syntax Braille device type Syntax of the type Note Serial device serial:path Relative paths are at /dev . USB device [serial-number] The brackets ( [] ) here indicate optionality. Bluetooth device bluetooth:address 24.4.2. Enabling the brltty service To enable the Braille display, enable the brltty service to start automatically on boot. By default, brltty is disabled. Prerequisites Ensure that the brltty package is installed: Optionally, you can install speech synthesis support for brltty : Procedure Enable the brltty service to start on boot: Verification Reboot the system. Check that the brltty service is running: 24.4.3. Authorizing users of a Braille display device To use a Braille display device, you must set the users who are authorized to use the Braille display device. Procedure In the /etc/brltty.conf file, ensure that keyfile is set to /etc/brlapi.key : This is the default value. Your organization might have overridden it.
Authorize the selected users by adding them to the brlapi group: Additional resources Editing user groups using the command line 24.4.4. Setting the driver for a Braille display device The brltty service automatically chooses a driver for your Braille display device. If the automatic detection fails or takes too long, you can set the driver manually. Prerequisites The automatic driver detection has failed or takes too long. Procedure Open the /etc/brltty.conf configuration file. Find the braille-driver directive, which specifies the driver for your Braille display device. Specify the identification code of the required driver in the braille-driver directive. Choose the identification code of required driver from the list provided in /etc/brltty.conf . For example, to use the XWindow driver: To set multiple drivers, list them separated by commas. Automatic detection then chooses from the listed drivers. 24.4.5. Connecting a Braille display device The brltty service automatically connects to your Braille display device. If the automatic detection fails, you can set the connection method manually. Prerequisites The Braille display device is physically connected to your system. The automatic connection has failed. Procedure If the device is connected by a serial-to-USB adapter, find the actual device name in the kernel messages on the device plug: Open the /etc/brltty.conf configuration file. Find the braille-device directive. In the braille-device directive, specify the connection. You can also set multiple devices, separated by commas, and each of them will be probed in turn. For example: Example 24.1. Settings for the first serial device Example 24.2. Settings for the first USB device matching Braille driver Example 24.3. Settings for a specific USB device by serial number Example 24.4. Settings for a serial-to-USB adapter Use the device name that you found earlier in the kernel messages: Note Setting braille-device to usb: does not work for a serial-to-USB adapter. Example 24.5. Settings for a specific Bluetooth device by address 24.4.6. Setting the text table The brltty service automatically selects a text table based on your system language. If your system language does not match the language of a document that you want to read, you can set the text table manually. Procedure Edit the /etc/brltty.conf file. Identify the code of your selected text table. You can find all available text tables in the /etc/brltty/Text/ directory. The code is the file name of the text table without its file suffix. Specify the code of the selected text table in the text-table directive. For example, to use the text table for American English: 24.4.7. Setting the contraction table You can select which table is used to encode the abbreviations with a Braille display device. Relative paths to particular contraction tables are stored within the /etc/brltty/Contraction/ directory. Warning If no table is specified, the brltty service does not use a contraction table. Procedure Choose a contraction table from the list in the /etc/brltty.conf file. For example, to use the contraction table for American English, grade 2: | [
"yum install brltty",
"yum install brltty-espeak-ng",
"systemctl enable --now brltty",
"systemctl status brltty ● brltty.service - Braille display driver for Linux/Unix Loaded: loaded (/usr/lib/systemd/system/brltty.service; enabled; vendor pres> Active: active (running) since Tue 2019-09-10 14:13:02 CEST; 39s ago Process: 905 ExecStart=/usr/bin/brltty (code=exited, status=0/SUCCESS) Main PID: 914 (brltty) Tasks: 3 (limit: 11360) Memory: 4.6M CGroup: /system.slice/brltty.service └─914 /usr/bin/brltty",
"api-parameters Auth=keyfile:/etc/brlapi.key",
"usermod --append -G brlapi user-name",
"XWindow braille-driver xw",
"journalctl --dmesg | fgrep ttyUSB",
"braille-device serial:ttyS0",
"braille-device usb:",
"braille-device usb:nnnnn",
"braille-device serial:ttyUSB0",
"braille-device bluetooth:xx:xx:xx:xx:xx:xx",
"text-table en_US # English (United States)",
"contraction-table en-us-g2 # English (US, grade 2)"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_the_desktop_environment_in_rhel_8/enabling-accessibility-for-visually-impaired-users_using-the-desktop-environment-in-rhel-8 |
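As a command-line alternative to the Super + Alt + S shortcut described in Section 24.3 above, the Orca screen reader can also be toggled with gsettings. This is a hedged example; it assumes a running GNOME session and changes the setting only for the current user.

gsettings set org.gnome.desktop.a11y.applications screen-reader-enabled true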
2. Working with ISO Images | 2. Working with ISO Images This section will explain how to extract an ISO image provided by Red Hat, and how to create a new boot image containing changes you made following other procedures in this book. 2.1. Extracting Red Hat Enterprise Linux Boot Images Before you start customizing the installer, you must download Red Hat-provided boot images. These images will be required to perform all procedures described in this book. You can obtain Red Hat Enterprise Linux 7 boot media from the Red Hat Customer Portal after logging in to your account. Your account must have sufficient entitlements to download Red Hat Enterprise Linux 7 images. Download either the Binary DVD or Boot ISO image from the Customer Portal. Either of these can be modified using procedures in this guide; other available downloads, such as the KVM Guest Image or Supplementary DVD can not. The variant of the image (such as Server or ComputeNode ) does not matter in this case; any variant can be used. For detailed download instructions and description of the Binary DVD and Boot ISO downloads, see the Red Hat Enterprise Linux 7 Installation Guide . After your chosen iso image finishes downloading, follow the procedure below to extract its contents in order to prepare for their modification. Procedure 1. Extracting ISO Images Mount the downloaded image. Replace path/to/image.iso with the path to the downloaded ISO. Also make sure that the target directory ( /mnt/iso ) exists and nothing else is currently mounted there. Create a working directory - a directory where you want to place the contents of the ISO image. Copy all contents of the mounted image to your new working directory. Make sure to use the -p option to preserve file and directory permissions and ownership. Unmount the image. After you finish unpacking, the ISO image is extracted in your /tmp/ISO where you can modify its contents. Continue with Section 3, "Customizing the Boot Menu" or Section 5, "Developing Installer Add-ons" . Once you finish making changes, create a new, modified ISO image using the instructions in Section 2.3, "Creating Custom Boot Images" . 2.2. Creating a product.img File A product.img image file is an archive containing files which replace existing files or add new ones in the installer runtime. During boot, Anaconda loads this file from the images/ directory on the boot media. Then, it uses files present inside this file to replace identically named files in the installer's file system; this is necessary to customize the installer (for example, for replacing default images with custom ones). The product.img image must contain a directory structure identical to the installer. Specifically, two topics discussed in this guide require you to create a product image. The table below lists the correct locations inside the image file directory structure: Table 1. Locations of Add-ons and Anaconda Visuals Type of custom content File system location Pixmaps (logo, side bar, top bar, etc.) /usr/share/anaconda/pixmaps/ Banners for the installation progress screen /usr/share/anaconda/pixmaps/rnotes/en/ GUI stylesheet /usr/share/anaconda/anaconda-gtk.css Installclasses (for changing the product name) /run/install/product/pyanaconda/installclasses/ Anaconda add-ons /usr/share/anaconda/addons/ The procedure below explains how to create a valid product.img file. Procedure 2. 
Creating product.img Navigate to a working directory such as /tmp , and create a subdirectory named product/ : Create a directory structure which is identical to the location of the file you want to replace. For example, if you want to test an add-on, which belongs in the /usr/share/anaconda/addons directory on the installation system; create the same structure in your working directory: Note You can browse the installer's runtime file system by booting the installation, switching to virtual console 1 ( Ctrl + Alt + F1 ) and then switching to the second tmux window ( Ctrl + b 2 ). This opens a shell prompt which you can use to browse the file system. Place your customized files (in this example, custom add-on for Anaconda ) into the newly created directory: Repeat the two steps above (create a directory structure and move modified files into it) for every file you want to add to the installer. Create a .buildstamp file in the root of the directory which will become the product.img file. The .buildstamp file describes the system version and several other parameters. The following is an example of a .buildstamp file from Red Hat Enterprise Linux 7.4: Note the IsFinal parameter, which specifies whether the image is for a release (GA) version of the product ( True ), or a pre-release such as Alpha, Beta, or an internal milestone ( False ). Change into the product/ directory, and create the product.img archive: This creates a product.img file one level above the product/ directory. Move the product.img file to the images/ directory of the extracted ISO image. After finishing this procedure, your customizations are placed in the correct directory. You can continue with Section 2.3, "Creating Custom Boot Images" to create a new bootable ISO image with your changes included. The product.img file will be automatically loaded when starting the installer. Note Instead of adding the product.img file on the boot media, you can place this file into a different location and use the inst.updates= boot option at the boot menu to load it. In that case, the image file can have any name, and it can be placed in any location (USB flash drive, hard disk, HTTP, FTP or NFS server), as long as this location is reachable from the installation system. See the Red Hat Enterprise Linux 7 Installation Guide for more information about Anaconda boot options. 2.3. Creating Custom Boot Images When you finish customizing boot images provided by Red Hat, you must create a new image which includes changes you made. To do this, follow the procedure below. Procedure 3. Creating ISO Images Make sure that all of your changes are included in the working directory. For example, if you are testing an add-on, make sure to place the product.img in the images/ directory. Make sure your current working directory is the top-level directory of the extracted ISO image - e.g. /tmp/ISO/iso . Create the new ISO image using genisoimage : In the above example: Make sure that values for the -V , -volset , and -A options match the image's boot loader configuration, if you are using the LABEL= directive for options which require a location to load a file on the same disk. If your boot loader configuration ( isolinux/isolinux.cfg for BIOS and EFI/BOOT/grub.cfg for UEFI) uses the inst.stage2=LABEL= disk_label stanza to load the second stage of the installer from the same disk, then the disk labels must match. Important In boot loader configuration files, replace all spaces in disk labels with \x20 . 
For example, if you create an ISO image with a label of RHEL 7.1 , boot loader configuration should use RHEL\x207.1 to refer to this label. Replace the value of the -o option ( -o ../NEWISO.iso ) with the file name of your new image. The value in the example will create file NEWISO.iso in the directory above the current one. For more information about this command, see the genisoimage(1) man page. Implant an MD5 checksum into the image. Without performing this step, image verification check (the rd.live.check option in the boot loader configuration) will fail and you will not be able to continue with the installation. In the above example, replace ../NEWISO.iso with the file name and location of the ISO image you have created in the step. After finishing this procedure, you can write the new ISO image to physical media or a network server to boot it on physical hardware, or you can use it to start installing a virtual machine. See the Red Hat Enterprise Linux 7 Installation Guide for instructions on preparing boot media or network server, and the Red Hat Enterprise Linux 7 Virtualization Getting Started Guide for instructions on creating virtual machines with ISO images. | [
"mount -t iso9660 -o loop path/to/image.iso /mnt/iso",
"mkdir /tmp/ISO",
"cp -pRf /mnt/iso /tmp/ISO",
"umount /mnt/iso",
"cd /tmp",
"mkdir product/",
"mkdir -p product/usr/share/anaconda/addons",
"cp -r ~/path/to/custom/addon/ product/usr/share/anaconda/addons/",
"[Main] Product=Red Hat Enterprise Linux Version=7.4 BugURL=https://bugzilla.redhat.com/ IsFinal=True UUID=201707110057.x86_64 [Compose] Lorax=19.6.92-1",
"cd product",
"find . | cpio -c -o | gzip -9cv > ../product.img",
"genisoimage -U -r -v -T -J -joliet-long -V \" RHEL-7.1 Server.x86_64 \" -volset \" RHEL-7.1 Server.x86_64 \" -A \" RHEL-7.1 Server.x86_64 \" -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -eltorito-alt-boot -e images/efiboot.img -no-emul-boot -o ../NEWISO.iso .",
"implantisomd5 ../NEWISO.iso"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/anaconda_customization_guide/sect-iso-images |
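If you want to boot the image created in Procedure 3 above from a USB flash drive, you can write it with dd. This is a minimal sketch; NEWISO.iso is the file name used in the examples above and /dev/sdX is a placeholder for your USB device node; writing to the wrong device destroys its contents.

dd if=NEWISO.iso of=/dev/sdX bs=4M
sync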
Chapter 5. Quarkus CXF extensions reference | Chapter 5. Quarkus CXF extensions reference This chapter provides reference information about Quarkus CXF extensions. 5.1. Quarkus CXF Core capabilities for implementing SOAP clients and JAX-WS services. 5.1.1. Maven coordinates Create a new project using quarkus-cxf on code.quarkus.redhat.com or add these coordinates to your existing project: <dependency> <groupId>io.quarkiverse.cxf</groupId> <artifactId>quarkus-cxf</artifactId> </dependency> 5.1.2. Supported standards JAX-WS JAXB WS-Addressing WS-Policy MTOM 5.1.3. Usage There are several chapters in the User guide covering the usage of this extension: Your first SOAP Web service Your first SOAP Client Configuration Package for JVM and native Logging SSL Authentication and authorization Advanced SOAP client topics Running behind a reverse proxy Generate Java from WSDL Generate WSDL from Java Contract first and code first CXF Interceptors and Features JAX-WS Handlers JAX-WS Providers Examples Common problems and troubleshooting 5.1.4. Configuration Configuration property fixed at build time. All other configuration properties are overridable at runtime. Configuration property Type Default quarkus.cxf.codegen.wsdl2java.enabled boolean true If true wsdl2java code generation is run whenever there are WSDL resources found on default or custom defined locations; otherwise wsdl2java is not executed. Environment variable : QUARKUS_CXF_CODEGEN_WSDL2JAVA_ENABLED Since Quarkus CXF : 2.0.0 quarkus.cxf.codegen.wsdl2java.includes List of string A comma separated list of glob patterns for selecting WSDL files which should be processed with wsdl2java tool. The paths are relative to src/main/resources or src/test/resources directories of the current Maven or Gradle module. The glob syntax is specified in io.quarkus.util.GlobUtil . Examples: calculator.wsdl,fruits.wsdl will match src/main/resources/calculator.wsdl and src/main/resources/fruits.wsdl under the current Maven or Gradle module, but will not match anything like src/main/resources/subdir/calculator.wsdl my-*-service.wsdl will match src/main/resources/my-foo-service.wsdl and src/main/resources/my-bar-service.wsdl **.wsdl will match any of the above There is a separate wsdl2java execution for each of the matching WSDL files. If you need different additional-params for each WSDL file, you may want to define a separate named parameter set for each one of them. Here is an example: # Parameters for foo.wsdl quarkus.cxf.codegen.wsdl2java.foo-params.includes = wsdl/foo.wsdl quarkus.cxf.codegen.wsdl2java.foo-params.wsdl-location = wsdl/foo.wsdl # Parameters for bar.wsdl quarkus.cxf.codegen.wsdl2java.bar-params.includes = wsdl/bar.wsdl quarkus.cxf.codegen.wsdl2java.bar-params.wsdl-location = wsdl/bar.wsdl quarkus.cxf.codegen.wsdl2java.bar-params.xjc = ts File extensions File extensions other than .wsdl will work during normal builds, but changes in the matching files may get overseen in Quarkus dev mode. We recommend that you always use the .wsdl extension. There is no default value for this option, so wsdl2java code generation is disabled by default. Specifying quarkus.cxf.codegen.wsdl2java.my-name.excludes without setting any includes will cause a build time error. Make sure that the file sets selected by quarkus.cxf.codegen.wsdl2java.includes and quarkus.cxf.codegen.wsdl2java.[whatever-name].includes do not overlap. Otherwise a build time exception will be thrown. 
The files from src/main/resources selected by includes and excludes are automatically included in native image and therefore you do not need to include them via quarkus.cxf.wsdl-path (deprecated) or quarkus.native.resources.includes/excludes . Environment variable : QUARKUS_CXF_CODEGEN_WSDL2JAVA_INCLUDES Since Quarkus CXF : 2.0.0 quarkus.cxf.codegen.wsdl2java.excludes List of string A comma separated list of path patterns for selecting WSDL files which should not be processed with wsdl2java tool. The paths are relative to src/main/resources or src/test/resources directories of the current Maven or Gradle module. Same syntax as includes . Environment variable : QUARKUS_CXF_CODEGEN_WSDL2JAVA_EXCLUDES Since Quarkus CXF : 2.0.0 quarkus.cxf.codegen.wsdl2java.output-directory string A directory into which the generated files will be written, either absolute or relative to the current Maven or Gradle module directory. The default value is build tool dependent: for Maven, it is typically target/generated-sources/wsdl2java , while for Gradle it is build/classes/java/quarkus-generated-sources/wsdl2java . Quarkus tooling is only able to set up the default value as a source folder for the given build tool. If you set this to a custom path it is up to you to make sure that your build tool recognizes the path as a source folder. Also, if you choose a path outside target directory for Maven or outside build directory for Gradle, you need to take care for cleaning stale resources generated by previous builds. E.g. if you change the value of package-names option from org.foo to org.bar you need to take care for the removal of the old package org.foo . This will be passed as option -d to wsdl2java Environment variable : QUARKUS_CXF_CODEGEN_WSDL2JAVA_OUTPUT_DIRECTORY Since Quarkus CXF : 2.6.0 quarkus.cxf.codegen.wsdl2java.package-names List of string A comma separated list of tokens; each token can be one of the following: A Java package under which the Java source files should be generated A string of the form namespaceURI=packageName - in this case the entities coming from the given namespace URI will be generated under the given Java package. This will be passed as option -p to wsdl2java Environment variable : QUARKUS_CXF_CODEGEN_WSDL2JAVA_PACKAGE_NAMES Since Quarkus CXF : 2.4.0 quarkus.cxf.codegen.wsdl2java.exclude-namespace-uris List of string A comma separated list of WSDL schema namespace URIs to ignore when generating Java code. This will be passed as option -nexclude to wsdl2java Environment variable : QUARKUS_CXF_CODEGEN_WSDL2JAVA_EXCLUDE_NAMESPACE_URIS Since Quarkus CXF : 2.4.0 quarkus.cxf.codegen.wsdl2java.service-name string The WSDL service name to use for the generated code. This will be passed as option -sn to wsdl2java Environment variable : QUARKUS_CXF_CODEGEN_WSDL2JAVA_SERVICE_NAME Since Quarkus CXF : 2.4.0 quarkus.cxf.codegen.wsdl2java.bindings List of string A list of paths pointing at JAXWS or JAXB binding files or XMLBeans context files. The path can be either absolute or relative to the current Maven or Gradle module. This will be passed as option -b to wsdl2java Environment variable : QUARKUS_CXF_CODEGEN_WSDL2JAVA_BINDINGS Since Quarkus CXF : 2.4.0 quarkus.cxf.codegen.wsdl2java.validate boolean false If true , WSDLs are validated before processing; otherwise the WSDLs are not validated.
This will be passed as option -validate to wsdl2java Environment variable : QUARKUS_CXF_CODEGEN_WSDL2JAVA_VALIDATE Since Quarkus CXF : 2.4.0 quarkus.cxf.codegen.wsdl2java.wsdl-location string Specifies the value of the @WebServiceClient annotation's wsdlLocation property. This will be passed as option -wsdlLocation to wsdl2java Environment variable : QUARKUS_CXF_CODEGEN_WSDL2JAVA_WSDL_LOCATION Since Quarkus CXF : 2.4.0 quarkus.cxf.codegen.wsdl2java.xjc List of string A comma separated list of XJC extensions to enable. The following extensions are available through io.quarkiverse.cxf:quarkus-cxf-xjc-plugins dependency: bg - generate getX() methods for boolean fields instead of isX() bgi - generate both isX() and getX() methods for boolean fields dv - initialize fields mapped from elements/attributes with their default values javadoc - generates JavaDoc based on xsd:documentation property-listener - add a property listener and the code for triggering the property change events to setter methods ts - generate toString() methods wsdlextension - generate WSDL extension methods in root classes These values correspond to -wsdl2java options -xjc-Xbg , -xjc-Xbgi , -xjc-Xdv , -xjc-Xjavadoc , -xjc-Xproperty-listener , -xjc-Xts and -xjc-Xwsdlextension respectively. Environment variable : QUARKUS_CXF_CODEGEN_WSDL2JAVA_XJC Since Quarkus CXF : 2.4.0 quarkus.cxf.codegen.wsdl2java.exception-super string java.lang.Exception A fully qualified class name to use as a superclass for fault beans generated from wsdl:fault elements This will be passed as option -exceptionSuper to wsdl2java Environment variable : QUARKUS_CXF_CODEGEN_WSDL2JAVA_EXCEPTION_SUPER Since Quarkus CXF : 2.4.0 quarkus.cxf.codegen.wsdl2java.async-methods List of string A comma separated list of SEI methods for which asynchronous sibling methods should be generated; similar to enableAsyncMapping in a JAX-WS binding file This will be passed as option -asyncMethods to wsdl2java Environment variable : QUARKUS_CXF_CODEGEN_WSDL2JAVA_ASYNC_METHODS Since Quarkus CXF : 2.4.0 quarkus.cxf.codegen.wsdl2java.bare-methods List of string A comma separated list of SEI methods for which wrapper style sibling methods should be generated; similar to enableWrapperStyle in JAX-WS binding file This will be passed as option -bareMethods to wsdl2java Environment variable : QUARKUS_CXF_CODEGEN_WSDL2JAVA_BARE_METHODS Since Quarkus CXF : 2.4.0 quarkus.cxf.codegen.wsdl2java.mime-methods List of string A comma separated list of SEI methods for which mime:content mapping should be enabled; similar to enableMIMEContent in JAX-WS binding file This will be passed as option -mimeMethods to wsdl2java Environment variable : QUARKUS_CXF_CODEGEN_WSDL2JAVA_MIME_METHODS Since Quarkus CXF : 2.4.0 quarkus.cxf.codegen.wsdl2java.additional-params List of string A comma separated list of additional command line parameters that should be passed to CXF wsdl2java tool along with the files selected by includes and excludes . Example: -keep,-dex,false . Check wsdl2java documentation for all supported options. Environment variable : QUARKUS_CXF_CODEGEN_WSDL2JAVA_ADDITIONAL_PARAMS Since Quarkus CXF : 2.0.0 quarkus.cxf.java2ws.enabled boolean true If true java2ws WSDL generation is run whenever there are Java classes selected via includes and excludes options; otherwise java2ws is not executed. 
Environment variable : QUARKUS_CXF_JAVA2WS_ENABLED Since Quarkus CXF : 2.0.0 quarkus.cxf.java2ws.includes List of string A comma separated list of glob patterns for selecting class names which should be processed with java2ws tool. The glob syntax is specified in io.quarkus.util.GlobUtil . The patterns are matched against fully qualified class names, such as org.acme.MyClass . The universe of class names to which includes and excludes are applied is defined as follows: 1. Only classes visible in Jandex are considered. 2. From those, only the ones annotated with @WebService are selected. Examples: Let's say that the application contains two classes annotated with @WebService and that both are visible in Jandex. Their names are org.foo.FruitWebService and org.bar.HelloWebService . Then quarkus.cxf.java2ws.includes = **.*WebService will match both class names quarkus.cxf.java2ws.includes = org.foo.* will match only org.foo.FruitWebService There is a separate java2ws execution for each of the matching class names. If you need different additional-params for each class, you may want to define a separate named parameter set for each one of them. Here is an example: # Parameters for the foo package quarkus.cxf.java2ws.foo-params.includes = org.foo.* quarkus.cxf.java2ws.foo-params.additional-params = -servicename,FruitService # Parameters for the bar package quarkus.cxf.java2ws.bar-params.includes = org.bar.* quarkus.cxf.java2ws.bar-params.additional-params = -servicename,HelloService There is no default value for this option, so java2ws WSDL generation is effectively disabled by default. Specifying quarkus.cxf.java2ws.excludes without setting any includes will cause a build time error. Make sure that the class names selected by quarkus.cxf.java2ws.includes and quarkus.cxf.java2ws.[whatever-name].includes do not overlap. Otherwise a build time exception will be thrown. If you would like to include the generated WSDL files in native image, you need to add them yourself using quarkus.native.resources.includes/excludes . Environment variable : QUARKUS_CXF_JAVA2WS_INCLUDES Since Quarkus CXF : 2.0.0 quarkus.cxf.java2ws.excludes List of string A comma separated list of glob patterns for selecting java class names which should not be processed with java2ws tool. Same syntax as includes . Environment variable : QUARKUS_CXF_JAVA2WS_EXCLUDES Since Quarkus CXF : 2.0.0 quarkus.cxf.java2ws.additional-params List of string A comma separated list of additional command line parameters that should be passed to CXF java2ws tool along with the files selected by includes and excludes . Example: -portname,12345 . Check java2ws documentation for all supported options. Supported options Currently, only options related to generation of WSDL from Java are supported. Environment variable : QUARKUS_CXF_JAVA2WS_ADDITIONAL_PARAMS Since Quarkus CXF : 2.0.0 quarkus.cxf.java2ws.wsdl-name-template string %CLASSES_DIR%/wsdl/%SIMPLE_CLASS_NAME%.wsdl A template for the names of generated WSDL files. There are 4 place holders, which can be used in the template: %SIMPLE_CLASS_NAME% - the simple class name of the Java class from which we are generating %FULLY_QUALIFIED_CLASS_NAME% - the fully qualified name from which we are generating with all dots are replaced replaced by underscores %TARGET_DIR% - the target directory of the current module of the current build tool; typically target for Maven and build for Gradle. 
%CLASSES_DIR% - the compiler output directory of the current module of the current build tool; typically target/classes for Maven and build/classes for Gradle. Environment variable : QUARKUS_CXF_JAVA2WS_WSDL_NAME_TEMPLATE Since Quarkus CXF : 2.0.0 quarkus.cxf.path string /services The default path for CXF resources. Earlier versions The default value before Quarkus CXF version 2.0.0 was / . Environment variable : QUARKUS_CXF_PATH Since Quarkus CXF : 1.0.0 quarkus.cxf.min-chunk-size int 128 The size in bytes of the chunks of memory allocated when writing data. This is a very advanced setting that should only be set if you understand exactly how it affects the output IO operations of the application. Environment variable : QUARKUS_CXF_MIN_CHUNK_SIZE Since Quarkus CXF : 2.6.0 quarkus.cxf.output-buffer-size int 8191 The size of the output stream response buffer in bytes. If a response is larger than this and no content-length is provided then the response will be chunked. Larger values may give slight performance increases for large responses, at the expense of more memory usage. Environment variable : QUARKUS_CXF_OUTPUT_BUFFER_SIZE Since Quarkus CXF : 2.6.0 quarkus.cxf.http-conduit-factory QuarkusCXFDefault , CXFDefault , HttpClientHTTPConduitFactory , URLConnectionHTTPConduitFactory Select the HTTPConduitFactory implementation for all clients except the ones that override this setting via quarkus.cxf.client."client-name".http-conduit-factory . QuarkusCXFDefault (default): if io.quarkiverse.cxf:quarkus-cxf-rt-transports-http-hc5 is present in class path, then its HTTPConduitFactory implementation will be used; otherwise this value is equivalent with URLConnectionHTTPConduitFactory (this may change, once issue #992 gets resolved in CXF) CXFDefault : the selection of HTTPConduitFactory implementation is left to CXF HttpClientHTTPConduitFactory : the HTTPConduitFactory will be set to an implementation always returning org.apache.cxf.transport.http.HttpClientHTTPConduit . This will use java.net.http.HttpClient as the underlying HTTP client. URLConnectionHTTPConduitFactory : the HTTPConduitFactory will be set to an implementation always returning org.apache.cxf.transport.http.URLConnectionHTTPConduit . This will use java.net.HttpURLConnection as the underlying HTTP client. Environment variable : QUARKUS_CXF_HTTP_CONDUIT_FACTORY Since Quarkus CXF : 2.3.0 quarkus.cxf.decoupled-endpoint-base string A URI base to use as a prefix of quarkus.cxf.client."client-name".decoupled-endpoint . You will typically want to set this to something like the following: quarkus.cxf.decoupled-endpoint-base = https://api.example.com:${quarkus.http.ssl-port}${quarkus.cxf.path} # or for plain HTTP quarkus.cxf.decoupled-endpoint-base = http://api.example.com:${quarkus.http.port}${quarkus.cxf.path} If you invoke your WS client from within a HTTP handler, you can leave this option unspecified and rather set it dynamically on the request context of your WS client using the org.apache.cxf.ws.addressing.decoupled.endpoint.base key.
Here is an example of how to do that from a RESTeasy handler method: import java.io.IOException; import java.util.Map; import jakarta.inject.Inject; import jakarta.ws.rs.POST; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.Context; import jakarta.ws.rs.core.MediaType; import jakarta.ws.rs.core.UriInfo; import jakarta.xml.ws.BindingProvider; import io.quarkiverse.cxf.annotation.CXFClient; import org.eclipse.microprofile.config.inject.ConfigProperty; @Path("/my-rest") public class MyRestEasyResource { @Inject @CXFClient("hello") HelloService helloService; @ConfigProperty(name = "quarkus.cxf.path") String quarkusCxfPath; @POST @Path("/hello") @Produces(MediaType.TEXT_PLAIN) public String hello(String body, @Context UriInfo uriInfo) throws IOException { // You may consider doing this only once if you are sure that your service is accessed // through a single hostname String decoupledEndpointBase = uriInfo.getBaseUriBuilder().path(quarkusCxfPath).build().toString(); Map<String, Object> requestContext = ((BindingProvider) helloService).getRequestContext(); requestContext.put("org.apache.cxf.ws.addressing.decoupled.endpoint.base", decoupledEndpointBase); return helloService.hello(body); } } Environment variable : QUARKUS_CXF_DECOUPLED_ENDPOINT_BASE Since Quarkus CXF : 2.7.0 quarkus.cxf.logging.enabled-for clients , services , both , none none Specifies whether the message logging will be enabled for clients, services, both or none. This setting can be overridden per client or service endpoint using quarkus.cxf.endpoint."/endpoint-path".logging.enabled or quarkus.cxf.client."client-name".logging.enabled respectively. Environment variable : QUARKUS_CXF_LOGGING_ENABLED_FOR Since Quarkus CXF : 2.6.0 quarkus.cxf.logging.pretty boolean false If true , the XML elements will be indented in the log; otherwise they will appear unindented. This setting can be overridden per client or service endpoint using quarkus.cxf.endpoint."/endpoint-path".logging.pretty or quarkus.cxf.client."client-name".logging.pretty respectively. Environment variable : QUARKUS_CXF_LOGGING_PRETTY Since Quarkus CXF : 2.6.0 quarkus.cxf.logging.limit int 49152 A message length in bytes at which it is truncated in the log. This setting can be overridden per client or service endpoint using quarkus.cxf.endpoint."/endpoint-path".logging.limit or quarkus.cxf.client."client-name".logging.limit respectively. Environment variable : QUARKUS_CXF_LOGGING_LIMIT Since Quarkus CXF : 2.6.0 quarkus.cxf.logging.in-mem-threshold long -1 A message length in bytes at which it will be written to disk. -1 is unlimited. This setting can be overridden per client or service endpoint using quarkus.cxf.endpoint."/endpoint-path".logging.in-mem-threshold or quarkus.cxf.client."client-name".logging.in-mem-threshold respectively. Environment variable : QUARKUS_CXF_LOGGING_IN_MEM_THRESHOLD Since Quarkus CXF : 2.6.0 quarkus.cxf.logging.log-binary boolean false If true , binary payloads will be logged; otherwise they won't be logged. This setting can be overridden per client or service endpoint using quarkus.cxf.endpoint."/endpoint-path".logging.log-binary or quarkus.cxf.client."client-name".logging.log-binary respectively. Environment variable : QUARKUS_CXF_LOGGING_LOG_BINARY Since Quarkus CXF : 2.6.0 quarkus.cxf.logging.log-multipart boolean true If true , multipart payloads will be logged; otherwise they won't be logged.
This setting can be overridden per client or service endpoint using quarkus.cxf.endpoint."/endpoint-path".logging.log-multipart or quarkus.cxf.client."client-name".logging.log-multipart respectively. Environment variable : QUARKUS_CXF_LOGGING_LOG_MULTIPART Since Quarkus CXF : 2.6.0 quarkus.cxf.logging.verbose boolean true If true , verbose logging will be enabled; otherwise it won't be enabled. This setting can be overridden per client or service endpoint using quarkus.cxf.endpoint."/endpoint-path".logging.verbose or quarkus.cxf.client."client-name".logging.verbose respectively. Environment variable : QUARKUS_CXF_LOGGING_VERBOSE Since Quarkus CXF : 2.6.0 quarkus.cxf.logging.in-binary-content-media-types List of string A comma separated list of additional binary media types to add to the default values in the LoggingInInterceptor whose content will not be logged unless log-binary is true . This setting can be overridden per client or service endpoint using quarkus.cxf.endpoint."/endpoint-path".logging.in-binary-content-media-types or quarkus.cxf.client."client-name".logging.in-binary-content-media-types respectively. Environment variable : QUARKUS_CXF_LOGGING_IN_BINARY_CONTENT_MEDIA_TYPES Since Quarkus CXF : 2.6.0 quarkus.cxf.logging.out-binary-content-media-types List of string A comma separated list of additional binary media types to add to the default values in the LoggingOutInterceptor whose content will not be logged unless log-binary is true . This setting can be overridden per client or service endpoint using quarkus.cxf.endpoint."/endpoint-path".logging.out-binary-content-media-types or quarkus.cxf.client."client-name".logging.out-binary-content-media-types respectively. Environment variable : QUARKUS_CXF_LOGGING_OUT_BINARY_CONTENT_MEDIA_TYPES Since Quarkus CXF : 2.6.0 quarkus.cxf.logging.binary-content-media-types List of string A comma separated list of additional binary media types to add to the default values in the LoggingOutInterceptor and LoggingInInterceptor whose content will not be logged unless log-binary is true . This setting can be overridden per client or service endpoint using quarkus.cxf.endpoint."/endpoint-path".logging.binary-content-media-types or quarkus.cxf.client."client-name".logging.binary-content-media-types respectively. Environment variable : QUARKUS_CXF_LOGGING_BINARY_CONTENT_MEDIA_TYPES Since Quarkus CXF : 2.6.0 quarkus.cxf.logging.sensitive-element-names List of string A comma separated list of XML elements containing sensitive information to be masked in the log. This setting can be overridden per client or service endpoint using quarkus.cxf.endpoint."/endpoint-path".logging.sensitive-element-names or quarkus.cxf.client."client-name".logging.sensitive-element-names respectively. Environment variable : QUARKUS_CXF_LOGGING_SENSITIVE_ELEMENT_NAMES Since Quarkus CXF : 2.6.0 quarkus.cxf.logging.sensitive-protocol-header-names List of string A comma separated list of protocol headers containing sensitive information to be masked in the log. This setting can be overridden per client or service endpoint using quarkus.cxf.endpoint."/endpoint-path".logging.sensitive-protocol-header-names or quarkus.cxf.client."client-name".logging.sensitive-protocol-header-names respectively. Environment variable : QUARKUS_CXF_LOGGING_SENSITIVE_PROTOCOL_HEADER_NAMES Since Quarkus CXF : 2.6.0 quarkus.cxf.codegen.wsdl2java."named-parameter-sets".includes List of string A comma separated list of glob patterns for selecting WSDL files which should be processed with wsdl2java tool. 
The paths are relative to src/main/resources or src/test/resources directories of the current Maven or Gradle module. The glob syntax is specified in io.quarkus.util.GlobUtil . Examples: calculator.wsdl,fruits.wsdl will match src/main/resources/calculator.wsdl and src/main/resources/fruits.wsdl under the current Maven or Gradle module, but will not match anything like src/main/resources/subdir/calculator.wsdl my-*-service.wsdl will match src/main/resources/my-foo-service.wsdl and src/main/resources/my-bar-service.wsdl **.wsdl will match any of the above There is a separate wsdl2java execution for each of the matching WSDL files. If you need different additional-params for each WSDL file, you may want to define a separate named parameter set for each one of them. Here is an example: # Parameters for foo.wsdl quarkus.cxf.codegen.wsdl2java.foo-params.includes = wsdl/foo.wsdl quarkus.cxf.codegen.wsdl2java.foo-params.wsdl-location = wsdl/foo.wsdl # Parameters for bar.wsdl quarkus.cxf.codegen.wsdl2java.bar-params.includes = wsdl/bar.wsdl quarkus.cxf.codegen.wsdl2java.bar-params.wsdl-location = wsdl/bar.wsdl quarkus.cxf.codegen.wsdl2java.bar-params.xjc = ts File extensions File extensions other than .wsdl will work during normal builds, but changes in the matching files may get overseen in Quarkus dev mode. We recommend that you always use the .wsdl extension. There is no default value for this option, so wsdl2java code generation is disabled by default. Specifying quarkus.cxf.codegen.wsdl2java.my-name.excludes without setting any includes will cause a build time error. Make sure that the file sets selected by quarkus.cxf.codegen.wsdl2java.includes and quarkus.cxf.codegen.wsdl2java.[whatever-name].includes do not overlap. Otherwise a build time exception will be thrown. The files from src/main/resources selected by includes and excludes are automatically included in native image and therefore you do not need to include them via quarkus.cxf.wsdl-path (deprecated) or quarkus.native.resources.includes/excludes . Environment variable : QUARKUS_CXF_CODEGEN_WSDL2JAVA__NAMED_PARAMETER_SETS__INCLUDES Since Quarkus CXF : 2.0.0 quarkus.cxf.codegen.wsdl2java."named-parameter-sets".excludes List of string A comma separated list of path patterns for selecting WSDL files which should not be processed with wsdl2java tool. The paths are relative to src/main/resources or src/test/resources directories of the current Maven or Gradle module. Same syntax as includes . Environment variable : QUARKUS_CXF_CODEGEN_WSDL2JAVA__NAMED_PARAMETER_SETS__EXCLUDES Since Quarkus CXF : 2.0.0 quarkus.cxf.codegen.wsdl2java."named-parameter-sets".output-directory string A directory into which the generated files will be written, either absolute or relative to the current Maven or Gradle module directory. The default value is build tool dependent: for Maven, it is typically target/generated-sources/wsdl2java , while for Gradle it is build/classes/java/quarkus-generated-sources/wsdl2java . Quarkus tooling is only able to set up the default value as a source folder for the given build tool. If you set this to a custom path it is up to you to make sure that your build tool recognizes the path as a source folder. Also, if you choose a path outside target directory for Maven or outside build directory for Gradle, you need to take care for cleaning stale resources generated by previous builds. E.g. if you change the value of package-names option from org.foo to org.bar you need to take care for the removal of the old package org.foo .
This will be passed as option -d to wsdl2java Environment variable : QUARKUS_CXF_CODEGEN_WSDL2JAVA__NAMED_PARAMETER_SETS__OUTPUT_DIRECTORY Since Quarkus CXF : 2.6.0 quarkus.cxf.codegen.wsdl2java."named-parameter-sets".package-names List of string A comma separated list of tokens; each token can be one of the following: A Java package under which the Java source files should be generated A string of the form namespaceURI=packageName - in this case the entities coming from the given namespace URI will be generated under the given Java package. This will be passed as option -p to wsdl2java Environment variable : QUARKUS_CXF_CODEGEN_WSDL2JAVA__NAMED_PARAMETER_SETS__PACKAGE_NAMES Since Quarkus CXF : 2.4.0 quarkus.cxf.codegen.wsdl2java."named-parameter-sets".exclude-namespace-uris List of string A comma separated list of WSDL schema namespace URIs to ignore when generating Java code. This will be passed as option -nexclude to wsdl2java Environment variable : QUARKUS_CXF_CODEGEN_WSDL2JAVA__NAMED_PARAMETER_SETS__EXCLUDE_NAMESPACE_URIS Since Quarkus CXF : 2.4.0 quarkus.cxf.codegen.wsdl2java."named-parameter-sets".service-name string The WSDL service name to use for the generated code. This will be passed as option -sn to wsdl2java Environment variable : QUARKUS_CXF_CODEGEN_WSDL2JAVA__NAMED_PARAMETER_SETS__SERVICE_NAME Since Quarkus CXF : 2.4.0 quarkus.cxf.codegen.wsdl2java."named-parameter-sets".bindings List of string A list of paths pointing at JAXWS or JAXB binding files or XMLBeans context files. The path to be either absolute or relative to the current Maven or Gradle module. This will be passed as option -b to wsdl2java Environment variable : QUARKUS_CXF_CODEGEN_WSDL2JAVA__NAMED_PARAMETER_SETS__BINDINGS Since Quarkus CXF : 2.4.0 quarkus.cxf.codegen.wsdl2java."named-parameter-sets".validate boolean false If true , WSDLs are validated before processing; otherwise the WSDLs are not validated. This will be passed as option -validate to wsdl2java Environment variable : QUARKUS_CXF_CODEGEN_WSDL2JAVA__NAMED_PARAMETER_SETS__VALIDATE Since Quarkus CXF : 2.4.0 quarkus.cxf.codegen.wsdl2java."named-parameter-sets".wsdl-location string Specifies the value of the @WebServiceClient annotation's wsdlLocation property. This will be passed as option -wsdlLocation to wsdl2java Environment variable : QUARKUS_CXF_CODEGEN_WSDL2JAVA__NAMED_PARAMETER_SETS__WSDL_LOCATION Since Quarkus CXF : 2.4.0 quarkus.cxf.codegen.wsdl2java."named-parameter-sets".xjc List of string A comma separated list of XJC extensions to enable. The following extensions are available through io.quarkiverse.cxf:quarkus-cxf-xjc-plugins dependency: bg - generate getX() methods for boolean fields instead of isX() bgi - generate both isX() and getX() methods for boolean fields dv - initialize fields mapped from elements/attributes with their default values javadoc - generates JavaDoc based on xsd:documentation property-listener - add a property listener and the code for triggering the property change events to setter methods ts - generate toString() methods wsdlextension - generate WSDL extension methods in root classes These values correspond to -wsdl2java options -xjc-Xbg , -xjc-Xbgi , -xjc-Xdv , -xjc-Xjavadoc , -xjc-Xproperty-listener , -xjc-Xts and -xjc-Xwsdlextension respectively. 
Environment variable : QUARKUS_CXF_CODEGEN_WSDL2JAVA__NAMED_PARAMETER_SETS__XJC Since Quarkus CXF : 2.4.0 quarkus.cxf.codegen.wsdl2java."named-parameter-sets".exception-super string java.lang.Exception A fully qualified class name to use as a superclass for fault beans generated from wsdl:fault elements This will be passed as option -exceptionSuper to wsdl2java Environment variable : QUARKUS_CXF_CODEGEN_WSDL2JAVA__NAMED_PARAMETER_SETS__EXCEPTION_SUPER Since Quarkus CXF : 2.4.0 quarkus.cxf.codegen.wsdl2java."named-parameter-sets".async-methods List of string A comma separated list of SEI methods for which asynchronous sibling methods should be generated; similar to enableAsyncMapping in a JAX-WS binding file This will be passed as option -asyncMethods to wsdl2java Environment variable : QUARKUS_CXF_CODEGEN_WSDL2JAVA__NAMED_PARAMETER_SETS__ASYNC_METHODS Since Quarkus CXF : 2.4.0 quarkus.cxf.codegen.wsdl2java."named-parameter-sets".bare-methods List of string A comma separated list of SEI methods for which wrapper style sibling methods should be generated; similar to enableWrapperStyle in JAX-WS binding file This will be passed as option -bareMethods to wsdl2java Environment variable : QUARKUS_CXF_CODEGEN_WSDL2JAVA__NAMED_PARAMETER_SETS__BARE_METHODS Since Quarkus CXF : 2.4.0 quarkus.cxf.codegen.wsdl2java."named-parameter-sets".mime-methods List of string A comma separated list of SEI methods for which mime:content mapping should be enabled; similar to enableMIMEContent in JAX-WS binding file This will be passed as option -mimeMethods to wsdl2java Environment variable : QUARKUS_CXF_CODEGEN_WSDL2JAVA__NAMED_PARAMETER_SETS__MIME_METHODS Since Quarkus CXF : 2.4.0 quarkus.cxf.codegen.wsdl2java."named-parameter-sets".additional-params List of string A comma separated list of additional command line parameters that should be passed to CXF wsdl2java tool along with the files selected by includes and excludes . Example: -keep,-dex,false . Check wsdl2java documentation for all supported options. Environment variable : QUARKUS_CXF_CODEGEN_WSDL2JAVA__NAMED_PARAMETER_SETS__ADDITIONAL_PARAMS Since Quarkus CXF : 2.0.0 quarkus.cxf.java2ws."named-parameter-sets".includes List of string A comma separated list of glob patterns for selecting class names which should be processed with java2ws tool. The glob syntax is specified in io.quarkus.util.GlobUtil . The patterns are matched against fully qualified class names, such as org.acme.MyClass . The universe of class names to which includes and excludes are applied is defined as follows: 1. Only classes visible in Jandex are considered. 2. From those, only the ones annotated with @WebService are selected. Examples: Let's say that the application contains two classes annotated with @WebService and that both are visible in Jandex. Their names are org.foo.FruitWebService and org.bar.HelloWebService . Then quarkus.cxf.java2ws.includes = **.*WebService will match both class names quarkus.cxf.java2ws.includes = org.foo.* will match only org.foo.FruitWebService There is a separate java2ws execution for each of the matching class names. If you need different additional-params for each class, you may want to define a separate named parameter set for each one of them. 
Here is an example: # Parameters for the foo package quarkus.cxf.java2ws.foo-params.includes = org.foo.* quarkus.cxf.java2ws.foo-params.additional-params = -servicename,FruitService # Parameters for the bar package quarkus.cxf.java2ws.bar-params.includes = org.bar.* quarkus.cxf.java2ws.bar-params.additional-params = -servicename,HelloService There is no default value for this option, so java2ws WSDL generation is effectively disabled by default. Specifying quarkus.cxf.java2ws.excludes without setting any includes will cause a build time error. Make sure that the class names selected by quarkus.cxf.java2ws.includes and quarkus.cxf.java2ws.[whatever-name].includes do not overlap. Otherwise a build time exception will be thrown. If you would like to include the generated WSDL files in the native image, you need to add them yourself using quarkus.native.resources.includes/excludes . Environment variable : QUARKUS_CXF_JAVA2WS__NAMED_PARAMETER_SETS__INCLUDES Since Quarkus CXF : 2.0.0 quarkus.cxf.java2ws."named-parameter-sets".excludes List of string A comma separated list of glob patterns for selecting Java class names which should not be processed with the java2ws tool. Same syntax as includes . Environment variable : QUARKUS_CXF_JAVA2WS__NAMED_PARAMETER_SETS__EXCLUDES Since Quarkus CXF : 2.0.0 quarkus.cxf.java2ws."named-parameter-sets".additional-params List of string A comma separated list of additional command line parameters that should be passed to the CXF java2ws tool along with the files selected by includes and excludes . Example: -portname,12345 . Check the java2ws documentation for all supported options. Supported options Currently, only options related to the generation of WSDL from Java are supported. Environment variable : QUARKUS_CXF_JAVA2WS__NAMED_PARAMETER_SETS__ADDITIONAL_PARAMS Since Quarkus CXF : 2.0.0 quarkus.cxf.java2ws."named-parameter-sets".wsdl-name-template string %CLASSES_DIR%/wsdl/%SIMPLE_CLASS_NAME%.wsdl A template for the names of generated WSDL files. There are 4 placeholders, which can be used in the template: %SIMPLE_CLASS_NAME% - the simple class name of the Java class from which we are generating %FULLY_QUALIFIED_CLASS_NAME% - the fully qualified name of the class from which we are generating, with all dots replaced by underscores %TARGET_DIR% - the target directory of the current module of the current build tool; typically target for Maven and build for Gradle. %CLASSES_DIR% - the compiler output directory of the current module of the current build tool; typically target/classes for Maven and build/classes for Gradle. Environment variable : QUARKUS_CXF_JAVA2WS__NAMED_PARAMETER_SETS__WSDL_NAME_TEMPLATE Since Quarkus CXF : 2.0.0 quarkus.cxf.client."client-name".service-interface string The client service interface class name Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SERVICE_INTERFACE Since Quarkus CXF : 1.0.0 quarkus.cxf.client."client-name".alternative boolean false Indicates whether this is an alternative proxy client configuration. If true, then this configuration is ignored when configuring a client without the annotation @CXFClient . Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__ALTERNATIVE Since Quarkus CXF : 1.0.0 quarkus.cxf.client."client-name".native.runtime-initialized boolean false If true , the client dynamic proxy class generated by the native compiler will be initialized at runtime; otherwise the proxy class will be initialized at build time.
Setting this to true makes sense if your service endpoint interface references some class initialized at runtime in its method signatures. E.g. Say, your service interface has method int add(Operands o) and the Operands class was requested to be initialized at runtime. Then, without setting this configuration parameter to true , the native compiler will throw an exception saying something like Classes that should be initialized at run time got initialized during image building: org.acme.Operands ... jdk.proxy<some-number>.USDProxy<some-number> caused initialization of this class . jdk.proxy<some-number>.USDProxy<some-number> is the proxy class generated by the native compiler. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__NATIVE_RUNTIME_INITIALIZED Since Quarkus CXF : 2.0.0 quarkus.cxf.endpoint."/endpoint-path".implementor string The service endpoint implementation class Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__IMPLEMENTOR Since Quarkus CXF : 1.0.0 quarkus.cxf.endpoint."/endpoint-path".wsdl string The service endpoint WSDL path Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__WSDL Since Quarkus CXF : 1.0.0 quarkus.cxf.endpoint."/endpoint-path".soap-binding string The URL of the SOAP Binding, should be one of four values: http://schemas.xmlsoap.org/wsdl/soap/http for SOAP11HTTP_BINDING http://schemas.xmlsoap.org/wsdl/soap/http?mtom=true for SOAP11HTTP_MTOM_BINDING http://www.w3.org/2003/05/soap/bindings/HTTP/ for SOAP12HTTP_BINDING http://www.w3.org/2003/05/soap/bindings/HTTP/?mtom=true for SOAP12HTTP_MTOM_BINDING Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SOAP_BINDING Since Quarkus CXF : 1.0.0 quarkus.cxf.endpoint."/endpoint-path".published-endpoint-url string The published service endpoint URL Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__PUBLISHED_ENDPOINT_URL Since Quarkus CXF : 1.0.0 quarkus.cxf.endpoint."/endpoint-path".logging.enabled true , false , pretty If true or pretty , the message logging will be enabled; otherwise it will not be enabled. If the value is pretty (since 2.7.0), the pretty attribute will effectively be set to true . The default is given by quarkus.cxf.logging.enabled-for . Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__LOGGING_ENABLED Since Quarkus CXF : 2.6.0 quarkus.cxf.endpoint."/endpoint-path".logging.pretty boolean If true , the XML elements will be indented in the log; otherwise they will appear unindented. The default is given by quarkus.cxf.logging.pretty Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__LOGGING_PRETTY Since Quarkus CXF : 2.6.0 quarkus.cxf.endpoint."/endpoint-path".logging.limit int A message length in bytes at which it is truncated in the log. The default is given by quarkus.cxf.logging.limit Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__LOGGING_LIMIT Since Quarkus CXF : 2.6.0 quarkus.cxf.endpoint."/endpoint-path".logging.in-mem-threshold long A message length in bytes at which it will be written to disk. -1 is unlimited. The default is given by quarkus.cxf.logging.in-mem-threshold Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__LOGGING_IN_MEM_THRESHOLD Since Quarkus CXF : 2.6.0 quarkus.cxf.endpoint."/endpoint-path".logging.log-binary boolean If true , binary payloads will be logged; otherwise they won't be logged. 
The default is given by quarkus.cxf.logging.log-binary Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__LOGGING_LOG_BINARY Since Quarkus CXF : 2.6.0 quarkus.cxf.endpoint."/endpoint-path".logging.log-multipart boolean If true , multipart payloads will be logged; otherwise they won't be logged. The default is given by quarkus.cxf.logging.log-multipart Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__LOGGING_LOG_MULTIPART Since Quarkus CXF : 2.6.0 quarkus.cxf.endpoint."/endpoint-path".logging.verbose boolean If true , verbose logging will be enabled; otherwise it won't be enabled. The default is given by quarkus.cxf.logging.verbose Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__LOGGING_VERBOSE Since Quarkus CXF : 2.6.0 quarkus.cxf.endpoint."/endpoint-path".logging.in-binary-content-media-types List of string A comma separated list of additional binary media types to add to the default values in the LoggingInInterceptor whose content will not be logged unless log-binary is true . The default is given by quarkus.cxf.logging.in-binary-content-media-types Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__LOGGING_IN_BINARY_CONTENT_MEDIA_TYPES Since Quarkus CXF : 2.6.0 quarkus.cxf.endpoint."/endpoint-path".logging.out-binary-content-media-types List of string A comma separated list of additional binary media types to add to the default values in the LoggingOutInterceptor whose content will not be logged unless log-binary is true . The default is given by quarkus.cxf.logging.out-binary-content-media-types Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__LOGGING_OUT_BINARY_CONTENT_MEDIA_TYPES Since Quarkus CXF : 2.6.0 quarkus.cxf.endpoint."/endpoint-path".logging.binary-content-media-types List of string A comma separated list of additional binary media types to add to the default values in the LoggingOutInterceptor and LoggingInInterceptor whose content will not be logged unless log-binary is true . The default is given by quarkus.cxf.logging.binary-content-media-types Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__LOGGING_BINARY_CONTENT_MEDIA_TYPES Since Quarkus CXF : 2.6.0 quarkus.cxf.endpoint."/endpoint-path".logging.sensitive-element-names List of string A comma separated list of XML elements containing sensitive information to be masked in the log. The default is given by quarkus.cxf.logging.sensitive-element-names Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__LOGGING_SENSITIVE_ELEMENT_NAMES Since Quarkus CXF : 2.6.0 quarkus.cxf.endpoint."/endpoint-path".logging.sensitive-protocol-header-names List of string A comma separated list of protocol headers containing sensitive information to be masked in the log. The default is given by quarkus.cxf.logging.sensitive-protocol-header-names Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__LOGGING_SENSITIVE_PROTOCOL_HEADER_NAMES Since Quarkus CXF : 2.6.0 quarkus.cxf.endpoint."/endpoint-path".features List of string A comma-separated list of fully qualified CXF Feature class names or named CDI beans. 
Examples: quarkus.cxf.endpoint."/hello".features = org.apache.cxf.ext.logging.LoggingFeature quarkus.cxf.endpoint."/fruit".features = #myCustomLoggingFeature In the second case, the #myCustomLoggingFeature bean can be produced as follows: import org.apache.cxf.ext.logging.LoggingFeature; import javax.enterprise.context.ApplicationScoped; import javax.enterprise.inject.Produces; class Producers { @Produces @ApplicationScoped LoggingFeature myCustomLoggingFeature() { LoggingFeature loggingFeature = new LoggingFeature(); loggingFeature.setPrettyLogging(true); return loggingFeature; } } Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__FEATURES Since Quarkus CXF : 1.0.0 quarkus.cxf.endpoint."/endpoint-path".handlers List of string The comma-separated list of Handler classes Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__HANDLERS Since Quarkus CXF : 1.0.0 quarkus.cxf.endpoint."/endpoint-path".in-interceptors List of string The comma-separated list of InInterceptor classes Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__IN_INTERCEPTORS Since Quarkus CXF : 1.0.0 quarkus.cxf.endpoint."/endpoint-path".out-interceptors List of string The comma-separated list of OutInterceptor classes Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__OUT_INTERCEPTORS Since Quarkus CXF : 1.0.0 quarkus.cxf.endpoint."/endpoint-path".out-fault-interceptors List of string The comma-separated list of OutFaultInterceptor classes Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__OUT_FAULT_INTERCEPTORS Since Quarkus CXF : 1.0.0 quarkus.cxf.endpoint."/endpoint-path".in-fault-interceptors List of string The comma-separated list of InFaultInterceptor classes Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__IN_FAULT_INTERCEPTORS Since Quarkus CXF : 1.0.0 quarkus.cxf.endpoint."/endpoint-path".schema-validation.enabled-for in , request , out , response , both , none Select for which messages XML Schema validation should be enabled. If not specified, no XML Schema validation will be enforced unless it is enabled by other means, such as @org.apache.cxf.annotations.SchemaValidation or @org.apache.cxf.annotations.EndpointProperty(key = "schema-validation-enabled", value = "true") annotations. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SCHEMA_VALIDATION_ENABLED_FOR Since Quarkus CXF : 2.7.0 quarkus.cxf.client."client-name".wsdl string A URL, resource path or local filesystem path pointing to a WSDL document to use when generating the service proxy of this client. 
Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__WSDL Since Quarkus CXF : 1.0.0 quarkus.cxf.client."client-name".soap-binding string The URL of the SOAP Binding, should be one of four values: http://schemas.xmlsoap.org/wsdl/soap/http for SOAP11HTTP_BINDING http://schemas.xmlsoap.org/wsdl/soap/http?mtom=true for SOAP11HTTP_MTOM_BINDING http://www.w3.org/2003/05/soap/bindings/HTTP/ for SOAP12HTTP_BINDING http://www.w3.org/2003/05/soap/bindings/HTTP/?mtom=true for SOAP12HTTP_MTOM_BINDING Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SOAP_BINDING Since Quarkus CXF : 1.0.0 quarkus.cxf.client."client-name".client-endpoint-url string The client endpoint URL Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__CLIENT_ENDPOINT_URL Since Quarkus CXF : 1.0.0 quarkus.cxf.client."client-name".endpoint-namespace string The client endpoint namespace Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__ENDPOINT_NAMESPACE Since Quarkus CXF : 1.0.0 quarkus.cxf.client."client-name".endpoint-name string The client endpoint name Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__ENDPOINT_NAME Since Quarkus CXF : 1.0.0 quarkus.cxf.client."client-name".username string The username for HTTP Basic authentication Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__USERNAME Since Quarkus CXF : 1.0.0 quarkus.cxf.client."client-name".password string The password for HTTP Basic authentication Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__PASSWORD Since Quarkus CXF : 1.0.0 quarkus.cxf.client."client-name".secure-wsdl-access boolean false If true , then the Authentication header will be sent preemptively when requesting the WSDL, as long as the username is set; otherwise the WSDL will be requested anonymously. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURE_WSDL_ACCESS Since Quarkus CXF : 2.7.0 quarkus.cxf.client."client-name".logging.enabled true , false , pretty If true or pretty , the message logging will be enabled; otherwise it will not be enabled. If the value is pretty (since 2.7.0), the pretty attribute will effectively be set to true . The default is given by quarkus.cxf.logging.enabled-for . Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__LOGGING_ENABLED Since Quarkus CXF : 2.6.0 quarkus.cxf.client."client-name".logging.pretty boolean If true , the XML elements will be indented in the log; otherwise they will appear unindented. The default is given by quarkus.cxf.logging.pretty Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__LOGGING_PRETTY Since Quarkus CXF : 2.6.0 quarkus.cxf.client."client-name".logging.limit int A message length in bytes at which it is truncated in the log. The default is given by quarkus.cxf.logging.limit Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__LOGGING_LIMIT Since Quarkus CXF : 2.6.0 quarkus.cxf.client."client-name".logging.in-mem-threshold long A message length in bytes at which it will be written to disk. -1 is unlimited. The default is given by quarkus.cxf.logging.in-mem-threshold Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__LOGGING_IN_MEM_THRESHOLD Since Quarkus CXF : 2.6.0 quarkus.cxf.client."client-name".logging.log-binary boolean If true , binary payloads will be logged; otherwise they won't be logged. 
The default is given by quarkus.cxf.logging.log-binary Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__LOGGING_LOG_BINARY Since Quarkus CXF : 2.6.0 quarkus.cxf.client."client-name".logging.log-multipart boolean If true , multipart payloads will be logged; otherwise they won't be logged. The default is given by quarkus.cxf.logging.log-multipart Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__LOGGING_LOG_MULTIPART Since Quarkus CXF : 2.6.0 quarkus.cxf.client."client-name".logging.verbose boolean If true , verbose logging will be enabled; otherwise it won't be enabled. The default is given by quarkus.cxf.logging.verbose Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__LOGGING_VERBOSE Since Quarkus CXF : 2.6.0 quarkus.cxf.client."client-name".logging.in-binary-content-media-types List of string A comma separated list of additional binary media types to add to the default values in the LoggingInInterceptor whose content will not be logged unless log-binary is true . The default is given by quarkus.cxf.logging.in-binary-content-media-types Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__LOGGING_IN_BINARY_CONTENT_MEDIA_TYPES Since Quarkus CXF : 2.6.0 quarkus.cxf.client."client-name".logging.out-binary-content-media-types List of string A comma separated list of additional binary media types to add to the default values in the LoggingOutInterceptor whose content will not be logged unless log-binary is true . The default is given by quarkus.cxf.logging.out-binary-content-media-types Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__LOGGING_OUT_BINARY_CONTENT_MEDIA_TYPES Since Quarkus CXF : 2.6.0 quarkus.cxf.client."client-name".logging.binary-content-media-types List of string A comma separated list of additional binary media types to add to the default values in the LoggingOutInterceptor and LoggingInInterceptor whose content will not be logged unless log-binary is true . The default is given by quarkus.cxf.logging.binary-content-media-types Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__LOGGING_BINARY_CONTENT_MEDIA_TYPES Since Quarkus CXF : 2.6.0 quarkus.cxf.client."client-name".logging.sensitive-element-names List of string A comma separated list of XML elements containing sensitive information to be masked in the log. The default is given by quarkus.cxf.logging.sensitive-element-names Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__LOGGING_SENSITIVE_ELEMENT_NAMES Since Quarkus CXF : 2.6.0 quarkus.cxf.client."client-name".logging.sensitive-protocol-header-names List of string A comma separated list of protocol headers containing sensitive information to be masked in the log. The default is given by quarkus.cxf.logging.sensitive-protocol-header-names Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__LOGGING_SENSITIVE_PROTOCOL_HEADER_NAMES Since Quarkus CXF : 2.6.0 quarkus.cxf.client."client-name".features List of string A comma-separated list of fully qualified CXF Feature class names. 
Example: quarkus.cxf.client."my-client".features = org.apache.cxf.ext.logging.LoggingFeature Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__FEATURES Since Quarkus CXF : 1.0.0 quarkus.cxf.client."client-name".handlers List of string The comma-separated list of Handler classes Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__HANDLERS Since Quarkus CXF : 1.0.0 quarkus.cxf.client."client-name".in-interceptors List of string The comma-separated list of InInterceptor classes Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__IN_INTERCEPTORS Since Quarkus CXF : 1.0.0 quarkus.cxf.client."client-name".out-interceptors List of string The comma-separated list of OutInterceptor classes Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__OUT_INTERCEPTORS Since Quarkus CXF : 1.0.0 quarkus.cxf.client."client-name".out-fault-interceptors List of string The comma-separated list of OutFaultInterceptor classes Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__OUT_FAULT_INTERCEPTORS Since Quarkus CXF : 1.0.0 quarkus.cxf.client."client-name".in-fault-interceptors List of string The comma-separated list of InFaultInterceptor classes Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__IN_FAULT_INTERCEPTORS Since Quarkus CXF : 1.0.0 quarkus.cxf.client."client-name".connection-timeout long 30000 Specifies the amount of time, in milliseconds, that the consumer will attempt to establish a connection before it times out. 0 is infinite. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__CONNECTION_TIMEOUT Since Quarkus CXF : 2.2.3 quarkus.cxf.client."client-name".receive-timeout long 60000 Specifies the amount of time, in milliseconds, that the consumer will wait for a response before it times out. 0 is infinite. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__RECEIVE_TIMEOUT Since Quarkus CXF : 2.2.3 quarkus.cxf.client."client-name".connection-request-timeout long 60000 Specifies the amount of time, in milliseconds, used when requesting a connection from the connection manager (if applicable). 0 is infinite. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__CONNECTION_REQUEST_TIMEOUT Since Quarkus CXF : 2.2.3 quarkus.cxf.client."client-name".auto-redirect boolean false Specifies if the consumer will automatically follow a server-issued redirection. (name is not part of the standard) Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__AUTO_REDIRECT Since Quarkus CXF : 2.2.3 quarkus.cxf.client."client-name".max-retransmits int -1 Specifies the maximum number of retransmits that are allowed for redirects. Retransmits for authorization are included in the retransmit count. Each redirect may cause another retransmit for an UNAUTHORIZED response code, i.e. 401. Any negative number indicates unlimited retransmits, although loop protection is provided. The default is unlimited. (name is not part of the standard) Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__MAX_RETRANSMITS Since Quarkus CXF : 2.2.3 quarkus.cxf.client."client-name".allow-chunking boolean true If true, the client is free to use chunking streams if it wants, but it is not required to use chunking streams. If false, the client must use regular, non-chunked requests in all cases. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__ALLOW_CHUNKING Since Quarkus CXF : 2.2.3 quarkus.cxf.client."client-name".chunking-threshold int 4096 If AllowChunking is true, this sets the threshold at which messages start getting chunked. Messages under this limit do not get chunked.
Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__CHUNKING_THRESHOLD Since Quarkus CXF : 2.2.3 quarkus.cxf.client."client-name".chunk-length int -1 Specifies the chunk length for a HttpURLConnection. This value is used in java.net.HttpURLConnection.setChunkedStreamingMode(int chunklen). chunklen indicates the number of bytes to write in each chunk. If chunklen is less than or equal to zero, a default value will be used. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__CHUNK_LENGTH Since Quarkus CXF : 2.2.3 quarkus.cxf.client."client-name".accept string Specifies the MIME types the client is prepared to handle (e.g., HTML, JPEG, GIF, etc.) Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__ACCEPT Since Quarkus CXF : 2.2.3 quarkus.cxf.client."client-name".accept-language string Specifies the language the client desires (e.g., English, French, etc.) Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__ACCEPT_LANGUAGE Since Quarkus CXF : 2.2.3 quarkus.cxf.client."client-name".accept-encoding string Specifies the encoding the client is prepared to handle (e.g., gzip) Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__ACCEPT_ENCODING Since Quarkus CXF : 2.2.3 quarkus.cxf.client."client-name".content-type string Specifies the content type of the stream being sent in a post request. (this should be text/xml for web services, or can be set to application/x-www-form-urlencoded if the client is sending form data). Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__CONTENT_TYPE Since Quarkus CXF : 2.2.3 quarkus.cxf.client."client-name".host string Specifies the Internet host and port number of the resource on which the request is being invoked. This is sent by default based upon the URL. Certain DNS scenarios or application designs may request you to set this, but typically it is not required. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__HOST Since Quarkus CXF : 2.2.3 quarkus.cxf.client."client-name".connection close , keep-alive keep-alive The connection disposition. If close the connection to the server is closed after each request/response dialog. If Keep-Alive the client requests the server to keep the connection open, and if the server honors the keep alive request, the connection is reused. Many servers and proxies do not honor keep-alive requests. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__CONNECTION Since Quarkus CXF : 2.2.3 quarkus.cxf.client."client-name".cache-control string Most commonly used to specify no-cache, however the standard supports a dozen or so caching related directives for requests Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__CACHE_CONTROL Since Quarkus CXF : 2.2.3 quarkus.cxf.client."client-name".version string auto HTTP Version used for the connection. The default value auto will use whatever the default is for the HTTPConduit implementation defined via quarkus.cxf.client."client-name".http-conduit-factory . Other possible values: 1.1 , 2 . Some of these values might be unsupported by some HTTPConduit implementations. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__VERSION Since Quarkus CXF : 2.2.3 quarkus.cxf.client."client-name".browser-type string The value of the User-Agent HTTP header. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__BROWSER_TYPE Since Quarkus CXF : 2.2.3 quarkus.cxf.client."client-name".decoupled-endpoint string An URI path (starting with / ) or a full URI for the receipt of responses over a separate provider consumer connection. 
If the value starts with / , then it is prefixed with the base URI configured via quarkus.cxf.client."client-name".decoupled-endpoint-base before being used as a value for the WS-Addressing ReplyTo message header. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__DECOUPLED_ENDPOINT Since Quarkus CXF : 2.2.3 quarkus.cxf.client."client-name".proxy-server string Specifies the address of proxy server if one is used. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__PROXY_SERVER Since Quarkus CXF : 2.2.3 quarkus.cxf.client."client-name".proxy-server-port int Specifies the port number used by the proxy server. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__PROXY_SERVER_PORT Since Quarkus CXF : 2.2.3 quarkus.cxf.client."client-name".non-proxy-hosts string Specifies the list of hostnames that will not use the proxy configuration. Examples: localhost - a single hostname localhost|www.google.com - two hostnames that will not use the proxy configuration localhost|www.google.*|*.apache.org - hostname patterns Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__NON_PROXY_HOSTS Since Quarkus CXF : 2.2.3 quarkus.cxf.client."client-name".proxy-server-type http , socks http Specifies the type of the proxy server. Can be either HTTP or SOCKS. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__PROXY_SERVER_TYPE Since Quarkus CXF : 2.2.3 quarkus.cxf.client."client-name".proxy-username string Username for the proxy authentication Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__PROXY_USERNAME Since Quarkus CXF : 2.2.3 quarkus.cxf.client."client-name".proxy-password string Password for the proxy authentication Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__PROXY_PASSWORD Since Quarkus CXF : 2.2.3 quarkus.cxf.client."client-name".http-conduit-factory QuarkusCXFDefault , CXFDefault , HttpClientHTTPConduitFactory , URLConnectionHTTPConduitFactory Select the HTTPConduitFactory implementation for this client. QuarkusCXFDefault (default): if io.quarkiverse.cxf:quarkus-cxf-rt-transports-http-hc5 is present in class path, then its HTTPConduitFactory implementation will be used; otherwise this value is equivalent with URLConnectionHTTPConduitFactory (this may change, once issue #992 gets resolved in CXF) CXFDefault : the selection of HTTPConduitFactory implementation is left to CXF HttpClientHTTPConduitFactory : the HTTPConduitFactory for this client will be set to an implementation always returning org.apache.cxf.transport.http.HttpClientHTTPConduit . This will use java.net.http.HttpClient as the underlying HTTP client. URLConnectionHTTPConduitFactory : the HTTPConduitFactory for this client will be set to an implementation always returning org.apache.cxf.transport.http.URLConnectionHTTPConduit . This will use java.net.HttpURLConnection as the underlying HTTP client. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__HTTP_CONDUIT_FACTORY Since Quarkus CXF : 2.3.0 quarkus.cxf.client."client-name".key-store string The key store location for this client. The resource is first looked up in the classpath, then in the file system. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__KEY_STORE Since Quarkus CXF : 3.8.1 quarkus.cxf.client."client-name".key-store-password string The key store password Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__KEY_STORE_PASSWORD Since Quarkus CXF : 3.8.1 quarkus.cxf.client."client-name".key-store-type string JKS The type of the key store. 
Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__KEY_STORE_TYPE Since Quarkus CXF : 3.8.1 quarkus.cxf.client."client-name".key-password string The key password. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__KEY_PASSWORD Since Quarkus CXF : 3.8.1 quarkus.cxf.client."client-name".trust-store string The trust store location for this client. The resource is first looked up in the classpath, then in the file system. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__TRUST_STORE Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".trust-store-password string The trust store password. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__TRUST_STORE_PASSWORD Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".trust-store-type string JKS The type of the trust store. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__TRUST_STORE_TYPE Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".hostname-verifier string Can be one of the following: One of the well known values: AllowAllHostnameVerifier , HttpsURLConnectionDefaultHostnameVerifier A fully qualified class name implementing javax.net.ssl.HostnameVerifier to look up in the CDI container. A bean name prefixed with # that will be looked up in the CDI container; example: #myHostnameVerifier If not specified, then the creation of the HostnameVerifier is delegated to CXF, which boils down to org.apache.cxf.transport.https.httpclient.DefaultHostnameVerifier with the default org.apache.cxf.transport.https.httpclient.PublicSuffixMatcherLoader as returned from PublicSuffixMatcherLoader.getDefault() . Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__HOSTNAME_VERIFIER Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".schema-validation.enabled-for in , request , out , response , both , none Select for which messages XML Schema validation should be enabled. If not specified, no XML Schema validation will be enforced unless it is enabled by other means, such as @org.apache.cxf.annotations.SchemaValidation or @org.apache.cxf.annotations.EndpointProperty(key = "schema-validation-enabled", value = "true") annotations. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SCHEMA_VALIDATION_ENABLED_FOR Since Quarkus CXF : 2.7.0 5.2. Metrics Feature Collect metrics using Micrometer . Important Unlike CXF Metrics feature , this Quarkus CXF extension does not support Dropwizard Metrics . Only Micrometer is supported. 5.2.1. Maven coordinates Create a new project using quarkus-cxf-rt-features-metrics on code.quarkus.redhat.com or add these coordinates to your existing project: <dependency> <groupId>io.quarkiverse.cxf</groupId> <artifactId>quarkus-cxf-rt-features-metrics</artifactId> </dependency> 5.2.2. Usage The integration of CXF into the Quarkus Micrometer ecosystem is implemented using io.quarkiverse.cxf.metrics.QuarkusCxfMetricsFeature . As long as your application depends on quarkus-cxf-rt-features-metrics , an instance of QuarkusCxfMetricsFeature is created internally and enabled by default for all clients and service endpoints created by Quarkus CXF. You can disable it via quarkus.cxf.metrics.enabled-for , quarkus.cxf.client."client-name".metrics.enabled and quarkus.cxf.endpoint."/endpoint-path".metrics.enabled properties documented below. 5.2.2.1. Runnable example There is an integration test covering Micrometer Metrics in the Quarkus CXF source tree. 
Unsurprisingly, it depends on quarkus-cxf-rt-features-metrics pom.xml <dependency> <groupId>io.quarkiverse.cxf</groupId> <artifactId>quarkus-cxf-rt-features-metrics</artifactId> </dependency> It uses the quarkus-micrometer-registry-prometheus extension to export the metrics in JSON format and for Prometheus: pom.xml <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-micrometer-registry-prometheus</artifactId> </dependency> The following configuration is needed to be able to inspect the collected metrics over a REST endpoint: application.properties quarkus.micrometer.export.json.enabled = true quarkus.micrometer.export.json.path = metrics/json quarkus.micrometer.export.prometheus.path = metrics/prometheus Having all the above in place, you can start the application in Dev mode: $ mvn quarkus:dev Now send a request to the HelloService : $ curl \ -d '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"><soap:Body><ns2:helloResponse xmlns:ns2="http://it.server.metrics.cxf.quarkiverse.io/"><return>Hello Joe!</return></ns2:helloResponse></soap:Body></soap:Envelope>' \ -H 'Content-Type: text/xml' \ -X POST \ http://localhost:8080/metrics/client/hello After that, you should see some metrics under cxf.server.requests in the output of the endpoint you configured above: $ curl http://localhost:8080/q/metrics/json metrics: { ... "cxf.server.requests": { "count;exception=None;faultCode=None;method=POST;operation=hello;outcome=SUCCESS;status=200;uri=/soap/hello": 2, "elapsedTime;exception=None;faultCode=None;method=POST;operation=hello;outcome=SUCCESS;status=200;uri=/soap/hello": 64.0 }, ... } 5.2.3. Configuration Configuration property fixed at build time. All other configuration properties are overridable at runtime. Configuration property Type Default quarkus.cxf.metrics.enabled-for clients , services , both , none both Specifies whether the metrics collection will be enabled for clients, services, both or none. This global setting can be overridden per client or service endpoint using the quarkus.cxf.client."client-name".metrics.enabled or quarkus.cxf.endpoint."/endpoint-path".metrics.enabled option respectively. Environment variable : QUARKUS_CXF_METRICS_ENABLED_FOR Since Quarkus CXF : 2.7.0 quarkus.cxf.client."client-name".metrics.enabled boolean true If true and if quarkus.cxf.metrics.enabled-for is set to both or clients then the MetricsFeature will be added to this client; otherwise the feature will not be added to this client. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__METRICS_ENABLED Since Quarkus CXF : 2.7.0 quarkus.cxf.endpoint."/endpoint-path".metrics.enabled boolean true If true and if quarkus.cxf.metrics.enabled-for is set to both or services then the MetricsFeature will be added to this service endpoint; otherwise the feature will not be added to this service endpoint. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__METRICS_ENABLED Since Quarkus CXF : 2.7.0 5.3. OpenTelemetry Generate OpenTelemetry traces . Important OpenTelemetry Metrics and Logging are not yet supported on either the Quarkus or the CXF side, hence Quarkus CXF cannot support them either. Therefore, tracing is the only OpenTelemetry feature supported by this extension. 5.3.1.
Maven coordinates Create a new project using quarkus-cxf-integration-tracing-opentelemetry on code.quarkus.redhat.com or add these coordinates to your existing project: <dependency> <groupId>io.quarkiverse.cxf</groupId> <artifactId>quarkus-cxf-integration-tracing-opentelemetry</artifactId> </dependency> 5.3.2. Usage This extension builds on top of org.apache.cxf.tracing.opentelemetry.OpenTelemetryFeature (for service endpoints) and org.apache.cxf.tracing.opentelemetry.OpenTelemetryClientFeature (for clients). Instances of these are created and configured internally using the instance of io.opentelemetry.api.OpenTelemetry provided by Quarkus OpenTelemetry . The tracing is enabled by default for all clients and service endpoints created by Quarkus CXF, unless you disable it explicitly via quarkus.cxf.otel.enabled-for , quarkus.cxf.client."client-name".otel.enabled or quarkus.cxf.endpoint."/endpoint-path".otel.enabled . 5.3.2.1. Runnable example There is an integration test covering OpenTelemetry in the Quarkus CXF source tree. It is using InMemorySpanExporter from io.opentelemetry:opentelemetry-sdk-testing , so that the spans can be inspected from tests easily. Refer to Quarkus OpenTelemetry guide for information about other supported span exporters and collectors. 5.3.3. Configuration Configuration property fixed at build time. All other configuration properties are overridable at runtime. Configuration property Type Default quarkus.cxf.otel.enabled-for clients , services , both , none both Specifies whether the OpenTelemetry tracing will be enabled for clients, services, both or none. This global setting can be overridden per client or service endpoint using the quarkus.cxf.client."client-name".otel.enabled or quarkus.cxf.endpoint."/endpoint-path".otel.enabled option respectively. Environment variable : QUARKUS_CXF_OTEL_ENABLED_FOR Since Quarkus CXF : 2.7.0 quarkus.cxf.client."client-name".otel.enabled boolean true If true and if quarkus.cxf.otel.enabled-for is set to both or clients then the OpenTelemetryClientFeature will be added to this client; otherwise the feature will not be added to this client. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__OTEL_ENABLED Since Quarkus CXF : 2.7.0 quarkus.cxf.endpoint."/endpoint-path".otel.enabled boolean true If true and if quarkus.cxf.otel.enabled-for is set to both or services then the OpenTelemetryFeature will be added to this service endpoint; otherwise the feature will not be added to this service endpoint. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__OTEL_ENABLED Since Quarkus CXF : 2.7.0 5.4. WS-Security Provides CXF framework's WS-Security implementation allowing you to: Pass authentication tokens between services Encrypt messages or parts of messages Sign messages Timestamp messages 5.4.1. Maven coordinates Create a new project using quarkus-cxf-rt-ws-security on code.quarkus.redhat.com or add these coordinates to your existing project: <dependency> <groupId>io.quarkiverse.cxf</groupId> <artifactId>quarkus-cxf-rt-ws-security</artifactId> </dependency> 5.4.2. Supported standards WS-Security WS-SecurityPolicy 5.4.3. Usage The CXF framework's WS-Security (WSS) implementation is based on WSS4J . It can be activated in two ways: By using WS-SecurityPolicy By adding WSS4J interceptors to your clients and service endpoints. WS-SecurityPolicy is preferable because in that way, the security requirements become a part of the WSDL contract. 
That in turn greatly simplifies not only the implementation of clients and service endpoints but also the interoperability between vendors. Nevertheless, if you leverage WS-SecurityPolicy, CXF sets up the WSS4J interceptors under the hood for you. We won't explain the manual approach with WSS4J interceptors in detail here, but you can still refer to our WS-Security integration test as an example. 5.4.3.1. WS-Security via WS-SecurityPolicy Tip The sample code snippets used in this section come from the WS-SecurityPolicy integration test in the source tree of Quarkus CXF Let's say our aim is to ensure that the communication between the client and service is confidential (through encryption) and that the message has not been tampered with (through digital signatures). We also want to assure that the clients are who they claim to be by authenticating themselves by X.509 certificates. We can express all these requirements in a single {link-quarkus-cxf-source-tree-base}/integration-tests/ws-security-policy/src/main/resources/encrypt-sign-policy.xml[WS-SecurityPolicy document]: encrypt-sign-policy.xml <?xml version="1.0" encoding="UTF-8" ?> <wsp:Policy wsu:Id="SecurityServiceEncryptThenSignPolicy" xmlns:wsp="http://www.w3.org/ns/ws-policy" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702"> <wsp:ExactlyOne> <wsp:All> 1 <sp:AsymmetricBinding xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702"> <wsp:Policy> 2 <sp:InitiatorToken> <wsp:Policy> <sp:X509Token sp:IncludeToken="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702/IncludeToken/AlwaysToRecipient"> <wsp:Policy> <sp:WssX509V3Token11/> </wsp:Policy> </sp:X509Token> </wsp:Policy> </sp:InitiatorToken> <sp:RecipientToken> <wsp:Policy> <sp:X509Token sp:IncludeToken="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702/IncludeToken/Never"> <wsp:Policy> <sp:WssX509V3Token11/> </wsp:Policy> </sp:X509Token> </wsp:Policy> </sp:RecipientToken> <sp:AlgorithmSuite> <wsp:Policy> <sp:Basic256/> </wsp:Policy> </sp:AlgorithmSuite> <sp:Layout> <wsp:Policy> <sp:Strict/> </wsp:Policy> </sp:Layout> <sp:IncludeTimestamp/> <sp:ProtectTokens/> <sp:OnlySignEntireHeadersAndBody/> <sp:EncryptBeforeSigning/> </wsp:Policy> </sp:AsymmetricBinding> 3 <sp:SignedParts xmlns:sp="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy"> <sp:Body/> </sp:SignedParts> 4 <sp:EncryptedParts xmlns:sp="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy"> <sp:Body/> </sp:EncryptedParts> <sp:Wss10 xmlns:sp="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy"> <wsp:Policy> <sp:MustSupportRefIssuerSerial/> </wsp:Policy> </sp:Wss10> </wsp:All> </wsp:ExactlyOne> </wsp:Policy> 1 AsymmetricBinding specifies the use of asymmetric (public/private key) cryptography for securing the communication between two parties 2 InitiatorToken indicates that the initiator (sender) of the message will use an X.509 certificate token that must always be provided to the recipient. 3 SignedParts specifies which parts of the SOAP message must be signed to ensure their integrity. 4 EncryptedParts specifies the parts of the SOAP message that must be encrypted to ensure their confidentiality. 
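The policy above protects only the SOAP Body . If you also need to sign or encrypt selected SOAP headers, the SignedParts and EncryptedParts assertions accept sp:Header elements in addition to sp:Body . The following fragment is a hypothetical variant of callout 3 (it is not part of the integration test) and assumes that the messages carry WS-Addressing headers:

<sp:SignedParts xmlns:sp="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy">
    <sp:Body/>
    <!-- additionally sign the WS-Addressing To header, if it is present in the messages -->
    <sp:Header Name="To" Namespace="http://www.w3.org/2005/08/addressing"/>
</sp:SignedParts>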
We set this policy on the Service Endpoint Interface (SEI) EncryptSignPolicyHelloService (from the integration-tests/ws-security-policy module in the Quarkus CXF source tree) using the @org.apache.cxf.annotations.Policy annotation: EncryptSignPolicyHelloService.java @WebService(serviceName = "EncryptSignPolicyHelloService") @Policy(placement = Policy.Placement.BINDING, uri = "encrypt-sign-policy.xml") public interface EncryptSignPolicyHelloService extends AbstractHelloService { ... } At first sight, setting the policy on the SEI should suffice to enforce it on both the service and all clients generated from the SEI or from the WSDL served by the service. However, that's not all. Security keys, usernames, passwords and other kinds of confidential information cannot be exposed in a public policy. Those have to be set in the configuration. Let's do it for the service first: application.properties # A service with encrypt-sign-policy.xml set quarkus.cxf.endpoint."/helloEncryptSign".implementor = io.quarkiverse.cxf.it.security.policy.EncryptSignPolicyHelloServiceImpl # can be jks or pkcs12 - set from Maven profiles in this test keystore.type = ${keystore.type} # Signature settings quarkus.cxf.endpoint."/helloEncryptSign".security.signature.username = bob quarkus.cxf.endpoint."/helloEncryptSign".security.signature.password = bob-keystore-password quarkus.cxf.endpoint."/helloEncryptSign".security.signature.properties."org.apache.ws.security.crypto.provider" = org.apache.ws.security.components.crypto.Merlin quarkus.cxf.endpoint."/helloEncryptSign".security.signature.properties."org.apache.ws.security.crypto.merlin.keystore.type" = ${keystore.type} quarkus.cxf.endpoint."/helloEncryptSign".security.signature.properties."org.apache.ws.security.crypto.merlin.keystore.password" = bob-keystore-password quarkus.cxf.endpoint."/helloEncryptSign".security.signature.properties."org.apache.ws.security.crypto.merlin.keystore.alias" = bob quarkus.cxf.endpoint."/helloEncryptSign".security.signature.properties."org.apache.ws.security.crypto.merlin.file" = bob-keystore.${keystore.type} # Encryption settings quarkus.cxf.endpoint."/helloEncryptSign".security.encryption.username = alice quarkus.cxf.endpoint."/helloEncryptSign".security.encryption.properties."org.apache.ws.security.crypto.provider" = org.apache.ws.security.components.crypto.Merlin quarkus.cxf.endpoint."/helloEncryptSign".security.encryption.properties."org.apache.ws.security.crypto.merlin.keystore.type" = ${keystore.type} quarkus.cxf.endpoint."/helloEncryptSign".security.encryption.properties."org.apache.ws.security.crypto.merlin.keystore.password" = bob-keystore-password quarkus.cxf.endpoint."/helloEncryptSign".security.encryption.properties."org.apache.ws.security.crypto.merlin.keystore.alias" = bob quarkus.cxf.endpoint."/helloEncryptSign".security.encryption.properties."org.apache.ws.security.crypto.merlin.file" = bob-keystore.${keystore.type} A similar setup is necessary on the client side: application.properties # A client with encrypt-sign-policy.xml set quarkus.cxf.client.helloEncryptSign.client-endpoint-url = https://localhost:${quarkus.http.test-ssl-port}/services/helloEncryptSign quarkus.cxf.client.helloEncryptSign.service-interface = io.quarkiverse.cxf.it.security.policy.EncryptSignPolicyHelloService quarkus.cxf.client.helloEncryptSign.features = #messageCollector # The client-endpoint-url above is HTTPS, so we have to set up the server's SSL certificates
quarkus.cxf.client.helloEncryptSign.trust-store = client-truststore.${keystore.type} quarkus.cxf.client.helloEncryptSign.trust-store-password = client-truststore-password # Signature settings quarkus.cxf.client.helloEncryptSign.security.signature.username = alice quarkus.cxf.client.helloEncryptSign.security.signature.password = alice-keystore-password quarkus.cxf.client.helloEncryptSign.security.signature.properties."org.apache.ws.security.crypto.provider" = org.apache.ws.security.components.crypto.Merlin quarkus.cxf.client.helloEncryptSign.security.signature.properties."org.apache.ws.security.crypto.merlin.keystore.type" = pkcs12 quarkus.cxf.client.helloEncryptSign.security.signature.properties."org.apache.ws.security.crypto.merlin.keystore.password" = alice-keystore-password quarkus.cxf.client.helloEncryptSign.security.signature.properties."org.apache.ws.security.crypto.merlin.keystore.alias" = alice quarkus.cxf.client.helloEncryptSign.security.signature.properties."org.apache.ws.security.crypto.merlin.file" = alice-keystore.${keystore.type} # Encryption settings quarkus.cxf.client.helloEncryptSign.security.encryption.username = bob quarkus.cxf.client.helloEncryptSign.security.encryption.properties."org.apache.ws.security.crypto.provider" = org.apache.ws.security.components.crypto.Merlin quarkus.cxf.client.helloEncryptSign.security.encryption.properties."org.apache.ws.security.crypto.merlin.keystore.type" = pkcs12 quarkus.cxf.client.helloEncryptSign.security.encryption.properties."org.apache.ws.security.crypto.merlin.keystore.password" = alice-keystore-password quarkus.cxf.client.helloEncryptSign.security.encryption.properties."org.apache.ws.security.crypto.merlin.keystore.alias" = alice quarkus.cxf.client.helloEncryptSign.security.encryption.properties."org.apache.ws.security.crypto.merlin.file" = alice-keystore.${keystore.type} To inspect the flow of the messages, you can execute the EncryptSignPolicyTest as follows: # Clone the repository $ git clone https://github.com/quarkiverse/quarkus-cxf.git -o upstream $ cd quarkus-cxf # Build the whole source tree $ mvn clean install -DskipTests -Dquarkus.build.skip # Run the test $ cd integration-tests/ws-security-policy $ mvn clean test -Dtest=EncryptSignPolicyTest You should see some messages containing Signature elements and encrypted bodies in the console output. 5.4.4. Configuration Configuration property fixed at build time. All other configuration properties are overridable at runtime. Configuration property Type Default quarkus.cxf.client."client-name".security.username string The user's name. It is used as follows: As the name in the UsernameToken for WS-Security As the alias name in the keystore to get the user's cert and private key for signature if signature.username is not set As the alias name in the keystore to get the user's public key for encryption if encryption.username is not set Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_USERNAME Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.password string The user's password when a callback-handler is not defined. This is only used for the password in a WS-Security UsernameToken. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_PASSWORD Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.signature.username string The user's name for signature. It is used as the alias name in the keystore to get the user's cert and private key for signature. If this is not defined, then username is used instead.
If that is also not specified, it uses the default alias set in the properties file referenced by signature.properties . If that's also not set, and the keystore only contains a single key, that key will be used. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_SIGNATURE_USERNAME Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.signature.password string The user's password for signature when a callback-handler is not defined. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_SIGNATURE_PASSWORD Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.encryption.username string The user's name for encryption. It is used as the alias name in the keystore to get the user's public key for encryption. If this is not defined, then username is used instead. If that is also not specified, it uses the default alias set in the properties file referenced by encrypt.properties . If that's also not set, and the keystore only contains a single key, that key will be used. For the WS-Security web service provider, the useReqSigCert value can be used to accept (encrypt to) any client whose public key is in the service's truststore (defined in encrypt.properties ). Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_ENCRYPTION_USERNAME Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.callback-handler string A reference to a javax.security.auth.callback.CallbackHandler bean used to obtain passwords, for both outbound and inbound requests. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_CALLBACK_HANDLER Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.saml-callback-handler string A reference to a javax.security.auth.callback.CallbackHandler implementation used to construct SAML Assertions. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_SAML_CALLBACK_HANDLER Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.signature.properties Map<String,String> The Crypto property configuration to use for signing, if signature.crypto is not set. Example [prefix].signature.properties."org.apache.ws.security.crypto.provider" = org.apache.ws.security.components.crypto.Merlin [prefix].signature.properties."org.apache.ws.security.crypto.merlin.keystore.password" = password [prefix].signature.properties."org.apache.ws.security.crypto.merlin.file" = certs/alice.jks Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_SIGNATURE_PROPERTIES Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.encryption.properties Map<String,String> The Crypto property configuration to use for encryption, if encryption.crypto is not set. Example [prefix].encryption.properties."org.apache.ws.security.crypto.provider" = org.apache.ws.security.components.crypto.Merlin [prefix].encryption.properties."org.apache.ws.security.crypto.merlin.keystore.password" = password [prefix].encryption.properties."org.apache.ws.security.crypto.merlin.file" = certs/alice.jks Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_ENCRYPTION_PROPERTIES Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.signature.crypto string A reference to a org.apache.wss4j.common.crypto.Crypto bean to be used for signature. If not set, signature.properties will be used to configure a Crypto instance.
Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_SIGNATURE_CRYPTO Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.encryption.crypto string A reference to a org.apache.wss4j.common.crypto.Crypto to be used for encryption. If not set, encryption.properties will be used to configure a Crypto instance. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_ENCRYPTION_CRYPTO Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.encryption.certificate string A message property for a prepared X509 certificate to be used for encryption. If this is not defined, then the certificate will be either loaded from the keystore encryption.properties or extracted from the request (when WS-Security is used and if encryption.username has the value useReqSigCert ). This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_ENCRYPTION_CERTIFICATE Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.enable-revocation boolean false If true , Certificate Revocation List (CRL) checking is enabled when verifying trust in a certificate; otherwise it is not enabled. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_ENABLE_REVOCATION Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.enable-unsigned-saml-assertion-principal boolean false If true , unsigned SAML assertions will be allowed as SecurityContext Principals; otherwise they won't be allowed as SecurityContext Principals. Signature The label "unsigned" refers to an internal signature. Even if the token is signed by an external signature (as per the "sender-vouches" requirement), this boolean must still be configured if you want to use the token to set up the security context. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_ENABLE_UNSIGNED_SAML_ASSERTION_PRINCIPAL Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.validate-saml-subject-confirmation boolean true If true , the SubjectConfirmation requirements of a received SAML Token (sender-vouches or holder-of-key) will be validated; otherwise they won't be validated. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_VALIDATE_SAML_SUBJECT_CONFIRMATION Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.sc-from-jaas-subject boolean true If true , security context can be created from JAAS Subject; otherwise it must not be created from JAAS Subject. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_SC_FROM_JAAS_SUBJECT Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.audience-restriction-validation boolean true If true , then if the SAML Token contains Audience Restriction URIs, one of them must match one of the values in audience.restrictions ; otherwise the SAML AudienceRestriction validation is disabled. This option is experimental, because it is not covered by tests yet.
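As an aside to the callback-handler and saml-callback-handler options listed above: such a bean implements javax.security.auth.callback.CallbackHandler and typically answers WSS4J password callbacks. The following is a purely illustrative sketch; the class name, bean name, alias and password are made up for the example and are not defined by Quarkus CXF.
import java.io.IOException;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.UnsupportedCallbackException;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Named;
import org.apache.wss4j.common.ext.WSPasswordCallback;
/* A named CDI bean that the callback-handler option can reference. */
@ApplicationScoped
@Named("wsSecurityPasswordCallback")
public class WsSecurityPasswordCallback implements CallbackHandler {
    @Override
    public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException {
        for (Callback callback : callbacks) {
            if (callback instanceof WSPasswordCallback) {
                WSPasswordCallback pwCallback = (WSPasswordCallback) callback;
                // WSS4J asks for the password of the given keystore alias or UsernameToken user
                if ("alice".equals(pwCallback.getIdentifier())) {
                    pwCallback.setPassword("alice-keystore-password");
                }
            } else {
                throw new UnsupportedCallbackException(callback, "Unsupported callback type");
            }
        }
    }
}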
Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_AUDIENCE_RESTRICTION_VALIDATION Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.saml-role-attributename string http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role The attribute URI of the SAML AttributeStatement where the role information is stored. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_SAML_ROLE_ATTRIBUTENAME Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.subject-cert-constraints string A String of regular expressions (separated by the value specified in security.cert.constraints.separator ) which will be applied to the subject DN of the certificate used for signature validation, after trust verification of the certificate chain associated with the certificate. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_SUBJECT_CERT_CONSTRAINTS Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.cert-constraints-separator string , The separator that is used to parse certificate constraints configured in security.subject.cert.constraints This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_CERT_CONSTRAINTS_SEPARATOR Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.actor string The actor or role name of the wsse:Security header. If this parameter is omitted, the actor name is not set. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_ACTOR Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.validate.token boolean true If true , the password of a received UsernameToken will be validated; otherwise it won't be validated. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_VALIDATE_TOKEN Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.username-token.always.encrypted boolean true Whether to always encrypt UsernameTokens that are defined as a SupportingToken . This should not be set to false in a production environment, as it exposes the password (or the digest of the password) on the wire. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_USERNAME_TOKEN_ALWAYS_ENCRYPTED Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.is-bsp-compliant boolean true If true , the compliance with the Basic Security Profile (BSP) 1.1 will be ensured; otherwise it will not be ensured. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_IS_BSP_COMPLIANT Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.enable.nonce.cache boolean If true , the UsernameToken nonces will be cached for both message initiators and recipients; otherwise they won't be cached for either message initiators or recipients. The default is true for message recipients, and false for message initiators. Caching Caching only applies when either a UsernameToken WS-SecurityPolicy is in effect, or the UsernameToken action has been configured for the non-security-policy case.
Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_ENABLE_NONCE_CACHE Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.enable.timestamp.cache boolean If true , the Timestamp Created Strings (these are only cached in conjunction with a message Signature) will be cached for both message initiators and recipients; otherwise they won't be cached for either message initiators or recipients. The default is true for message recipients, and false for message initiators. Caching Caching only applies when either an IncludeTimestamp policy is in effect, or the Timestamp action has been configured for the non-security-policy case. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_ENABLE_TIMESTAMP_CACHE Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.enable.streaming boolean false If true , the new streaming (StAX) implementation of WS-Security is used; otherwise the old DOM implementation is used. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_ENABLE_STREAMING Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.return.security.error boolean false If true , detailed security error messages are sent to clients; otherwise the details are omitted and only a generic error message is sent. The "real" security errors should not be returned to the client in production, as they may leak information about the deployment, or otherwise provide an "oracle" for attacks. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_RETURN_SECURITY_ERROR Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.must-understand boolean true If true , the SOAP mustUnderstand header is included in security headers based on a WS-SecurityPolicy; otherwise the header is always omitted. Works only with enable.streaming = true - see CXF-8940 Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_MUST_UNDERSTAND Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.enable.saml.cache boolean If true and the token contains a OneTimeUse Condition, the SAML2 Token Identifiers will be cached for both message initiators and recipients; otherwise they won't be cached for either message initiators or recipients. The default is true for message recipients, and false for message initiators. Caching only applies when either a SamlToken policy is in effect, or a SAML action has been configured for the non-security-policy case. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_ENABLE_SAML_CACHE Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.store.bytes.in.attachment boolean Whether to store bytes (CipherData or BinarySecurityToken) in an attachment. The default is true if MTOM is enabled. Set it to false to BASE-64 encode the bytes and inline them in the message instead. Setting this to true is more efficient, as it means that the BASE-64 encoding step can be skipped. This only applies to the DOM WS-Security stack. This option is experimental, because it is not covered by tests yet.
Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_STORE_BYTES_IN_ATTACHMENT Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.swa.encryption.attachment.transform.content boolean false If true , Attachment-Content-Only transform will be used when an Attachment is encrypted via a WS-SecurityPolicy expression; otherwise Attachment-Complete transform will be used when an Attachment is encrypted via a WS-SecurityPolicy expression. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_SWA_ENCRYPTION_ATTACHMENT_TRANSFORM_CONTENT Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.use.str.transform boolean true If true , the STR (Security Token Reference) Transform will be used when (externally) signing a SAML Token; otherwise the STR (Security Token Reference) Transform will not be used. Some frameworks cannot process the SecurityTokenReference . You may set this false in such cases. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_USE_STR_TRANSFORM Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.add.inclusive.prefixes boolean true If true , an InclusiveNamespaces PrefixList will be added as a CanonicalizationMethod child when generating Signatures using WSConstants.C14N_EXCL_OMIT_COMMENTS ; otherwise the PrefixList will not be added. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_ADD_INCLUSIVE_PREFIXES Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.disable.require.client.cert.check boolean false If true , the enforcement of the WS-SecurityPolicy RequireClientCertificate policy will be disabled; otherwise the enforcement of the WS-SecurityPolicy RequireClientCertificate policy is enabled. Some servers may not do client certificate verification at the start of the SSL handshake, and therefore the client certificates may not be available to the WS-Security layer for policy verification. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_DISABLE_REQUIRE_CLIENT_CERT_CHECK Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.expand.xop.include boolean If true , the xop:Include elements will be searched for encryption and signature (on the outbound side) or for signature verification (on the inbound side); otherwise the search won't happen. This ensures that the actual bytes are signed, and not just the reference. The default is true if MTOM is enabled, otherwise the default is false . This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_EXPAND_XOP_INCLUDE Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.timestamp.timeToLive string 300 The time in seconds to add to the Creation value of an incoming Timestamp to determine whether to accept it as valid or not. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_TIMESTAMP_TIMETOLIVE Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.timestamp.futureTimeToLive string 60 The time in seconds in the future within which the Created time of an incoming Timestamp is valid. 
The default is greater than zero to avoid problems where clocks are slightly askew. Set this to 0 to reject all future-created `Timestamp`s. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_TIMESTAMP_FUTURETIMETOLIVE Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.usernametoken.timeToLive string 300 The time in seconds to append to the Creation value of an incoming UsernameToken to determine whether to accept it as valid or not. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_USERNAMETOKEN_TIMETOLIVE Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.usernametoken.futureTimeToLive string 60 The time in seconds in the future within which the Created time of an incoming UsernameToken is valid. The default is greater than zero to avoid problems where clocks are slightly askew. Set this to 0 to reject all future-created `UsernameToken`s. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_USERNAMETOKEN_FUTURETIMETOLIVE Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.spnego.client.action string A reference to a org.apache.wss4j.common.spnego.SpnegoClientAction bean to use for SPNEGO. This allows the user to plug in a different implementation to obtain a service ticket. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_SPNEGO_CLIENT_ACTION Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.nonce.cache.instance string A reference to a org.apache.wss4j.common.cache.ReplayCache bean used to cache UsernameToken nonces. A org.apache.wss4j.common.cache.EHCacheReplayCache instance is used by default. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_NONCE_CACHE_INSTANCE Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.timestamp.cache.instance string A reference to a org.apache.wss4j.common.cache.ReplayCache bean used to cache Timestamp Created Strings. A org.apache.wss4j.common.cache.EHCacheReplayCache instance is used by default. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_TIMESTAMP_CACHE_INSTANCE Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.saml.cache.instance string A reference to a org.apache.wss4j.common.cache.ReplayCache bean used to cache SAML2 Token Identifier Strings (if the token contains a OneTimeUse condition). A org.apache.wss4j.common.cache.EHCacheReplayCache instance is used by default. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_SAML_CACHE_INSTANCE Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.cache.config.file string Set this property to point to a configuration file for the underlying caching implementation for the TokenStore . The default configuration file that is used is cxf-ehcache.xml in org.apache.cxf:cxf-rt-security JAR. This option is experimental, because it is not covered by tests yet. 
Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_CACHE_CONFIG_FILE Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.token-store-cache-instance string A reference to a org.apache.cxf.ws.security.tokenstore.TokenStore bean to use for caching security tokens. By default this uses a instance. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_TOKEN_STORE_CACHE_INSTANCE Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.cache.identifier string The Cache Identifier to use with the TokenStore. CXF uses the following key to retrieve a token store: org.apache.cxf.ws.security.tokenstore.TokenStore-<identifier> . This key can be used to configure service-specific cache configuration. If the identifier does not match, then it falls back to a cache configuration with key org.apache.cxf.ws.security.tokenstore.TokenStore . The default <identifier> is the QName of the service in question. However to pick up a custom cache configuration (for example, if you want to specify a TokenStore per-client proxy), it can be configured with this identifier instead. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_CACHE_IDENTIFIER Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.role.classifier string The Subject Role Classifier to use. If one of the WSS4J Validators returns a JAAS Subject from Validation, then the WSS4JInInterceptor will attempt to create a SecurityContext based on this Subject. If this value is not specified, then it tries to get roles using the DefaultSecurityContext in org.apache.cxf:cxf-core . Otherwise it uses this value in combination with the role.classifier.type to get the roles from the Subject . This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_ROLE_CLASSIFIER Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.role.classifier.type string prefix The Subject Role Classifier Type to use. If one of the WSS4J Validators returns a JAAS Subject from Validation, then the WSS4JInInterceptor will attempt to create a SecurityContext based on this Subject. Currently accepted values are prefix or classname . Must be used in conjunction with the role.classifier . This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_ROLE_CLASSIFIER_TYPE Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.asymmetric.signature.algorithm string This configuration tag allows the user to override the default Asymmetric Signature algorithm (RSA-SHA1) for use in WS-SecurityPolicy, as the WS-SecurityPolicy specification does not allow the use of other algorithms at present. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_ASYMMETRIC_SIGNATURE_ALGORITHM Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.symmetric.signature.algorithm string This configuration tag allows the user to override the default Symmetric Signature algorithm (HMAC-SHA1) for use in WS-SecurityPolicy, as the WS-SecurityPolicy specification does not allow the use of other algorithms at present. This option is experimental, because it is not covered by tests yet. 
Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_SYMMETRIC_SIGNATURE_ALGORITHM Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.password.encryptor.instance string A reference to a org.apache.wss4j.common.crypto.PasswordEncryptor bean, which is used to encrypt or decrypt passwords in the Merlin Crypto implementation (or any custom Crypto implementations). By default, WSS4J uses the org.apache.wss4j.common.crypto.JasyptPasswordEncryptor which must be instantiated with a password to use to decrypt keystore passwords in the Merlin Crypto definition. This password is obtained via the CallbackHandler defined via callback-handler . The encrypted passwords must be stored in the format "ENC(encoded encrypted password)". This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_PASSWORD_ENCRYPTOR_INSTANCE Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.delegated.credential string A reference to a Kerberos org.ietf.jgss.GSSCredential bean to use for WS-Security. This is used to retrieve a service ticket instead of using the client credentials. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_DELEGATED_CREDENTIAL Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.security.context.creator string A reference to a org.apache.cxf.ws.security.wss4j.WSS4JSecurityContextCreator bean that is used to create a CXF SecurityContext from the set of WSS4J processing results. The default implementation is org.apache.cxf.ws.security.wss4j.DefaultWSS4JSecurityContextCreator . This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_SECURITY_CONTEXT_CREATOR Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.security.token.lifetime long 300000 The security token lifetime value (in milliseconds). This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_SECURITY_TOKEN_LIFETIME Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.kerberos.request.credential.delegation boolean false If true , credential delegation is requested in the KerberosClient; otherwise credential delegation is not requested in the KerberosClient. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_KERBEROS_REQUEST_CREDENTIAL_DELEGATION Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.kerberos.use.credential.delegation boolean false If true , a GSSCredential bean is retrieved from the Message Context using the delegated.credential property and then it is used to obtain a service ticket. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_KERBEROS_USE_CREDENTIAL_DELEGATION Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.kerberos.is.username.in.servicename.form boolean false If true , the Kerberos username is in servicename form; otherwise the Kerberos username is not in servicename form. This option is experimental, because it is not covered by tests yet.
Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_KERBEROS_IS_USERNAME_IN_SERVICENAME_FORM Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.kerberos.jaas.context string The JAAS Context name to use for Kerberos. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_KERBEROS_JAAS_CONTEXT Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.kerberos.spn string The Kerberos Service Provider Name (spn) to use. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_KERBEROS_SPN Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.kerberos.client string A reference to a org.apache.cxf.ws.security.kerberos.KerberosClient bean used to obtain a service ticket. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_KERBEROS_CLIENT Since Quarkus CXF : 2.5.0 quarkus.cxf.client."client-name".security.custom.digest.algorithm string http://www.w3.org/2001/04/xmlenc#sha256 The Digest Algorithm to set on the org.apache.wss4j.policy.model.AlgorithmSuite.AlgorithmSuiteType . This value is only taken into account if the current security policy has set CustomAlgorithmSuite as an AlgorithmSuite , for instance
<wsp:Policy wsu:Id="SecurityServiceEncryptThenSignPolicy"
    xmlns:wsp="http://www.w3.org/ns/ws-policy"
    xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"
    xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702">
  <wsp:ExactlyOne>
    <wsp:All>
      <sp:AsymmetricBinding xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702">
        <wsp:Policy>
          ...
          <sp:AlgorithmSuite>
            <wsp:Policy>
              <sp:CustomAlgorithmSuite/>
            </wsp:Policy>
          </sp:AlgorithmSuite>
          ...
        </wsp:Policy>
      </sp:AsymmetricBinding>
      ...
    </wsp:All>
  </wsp:ExactlyOne>
</wsp:Policy>
For more information about algorithms, see WS-SecurityPolicy 1.2 specification and the Algorithms section of XML Encryption Syntax and Processing Specification. CustomAlgorithmSuite and the *.security.custom.* family of options were introduced to make it possible to run CXF SOAP clients and services on systems with FIPS assertions enabled. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_CUSTOM_DIGEST_ALGORITHM Since Quarkus CXF : 3.8.1 quarkus.cxf.client."client-name".security.custom.encryption.algorithm string http://www.w3.org/2009/xmlenc11#aes256-gcm The Encryption Algorithm to set on the org.apache.wss4j.policy.model.AlgorithmSuite.AlgorithmSuiteType . This value is only taken into account if the current security policy has set CustomAlgorithmSuite as an AlgorithmSuite For more information about algorithms, see WS-SecurityPolicy 1.2 specification and the Algorithms section of XML Encryption Syntax and Processing Specification. CustomAlgorithmSuite and the *.security.custom.* family of options were introduced to make it possible to run CXF SOAP clients and services on systems with FIPS assertions enabled. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_CUSTOM_ENCRYPTION_ALGORITHM Since Quarkus CXF : 3.8.1 quarkus.cxf.client."client-name".security.custom.symmetric.key.encryption.algorithm string http://www.w3.org/2001/04/xmlenc#kw-aes256 The Symmetric Key Encryption Algorithm to set on the org.apache.wss4j.policy.model.AlgorithmSuite.AlgorithmSuiteType .
This value is only taken into account if the current security policy has set CustomAlgorithmSuite as an AlgorithmSuite For more information about algorithms, see WS-SecurityPolicy 1.2 specification and the Algorithms section of XML Encryption Syntax and Processing Specification. CustomAlgorithmSuite and the *.security.custom.* family of options were introduced to make it possible to run CXF SOAP clients and services on systems with FIPS assertions enabled. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_CUSTOM_SYMMETRIC_KEY_ENCRYPTION_ALGORITHM Since Quarkus CXF : 3.8.1 quarkus.cxf.client."client-name".security.custom.asymmetric.key.encryption.algorithm string http://www.w3.org/2001/04/xmlenc#rsa-1_5 The Asymmetric Key Encryption Algorithm to set on the org.apache.wss4j.policy.model.AlgorithmSuite.AlgorithmSuiteType . This value is only taken into account if the current security policy has set CustomAlgorithmSuite as an AlgorithmSuite For more information about algorithms, see WS-SecurityPolicy 1.2 specification and the Algorithms section of XML Encryption Syntax and Processing Specification. CustomAlgorithmSuite and the *.security.custom.* family of options were introduced to make it possible to run CXF SOAP clients and services on systems with FIPS assertions enabled. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_CUSTOM_ASYMMETRIC_KEY_ENCRYPTION_ALGORITHM Since Quarkus CXF : 3.8.1 quarkus.cxf.client."client-name".security.custom.encryption.key.derivation string http://schemas.xmlsoap.org/ws/2005/02/sc/dk/p_sha1 The Encryption Key Derivation to set on the org.apache.wss4j.policy.model.AlgorithmSuite.AlgorithmSuiteType . This value is only taken into account if the current security policy has set CustomAlgorithmSuite as an AlgorithmSuite For more information about algorithms, see WS-SecurityPolicy 1.2 specification and the Algorithms section of XML Encryption Syntax and Processing Specification. CustomAlgorithmSuite and the *.security.custom.* family of options were introduced to make it possible to run CXF SOAP clients and services on systems with FIPS assertions enabled. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_CUSTOM_ENCRYPTION_KEY_DERIVATION Since Quarkus CXF : 3.8.1 quarkus.cxf.client."client-name".security.custom.signature.key.derivation string http://schemas.xmlsoap.org/ws/2005/02/sc/dk/p_sha1 The Signature Key Derivation to set on the org.apache.wss4j.policy.model.AlgorithmSuite.AlgorithmSuiteType . This value is only taken into account if the current security policy has set CustomAlgorithmSuite as an AlgorithmSuite For more information about algorithms, see WS-SecurityPolicy 1.2 specification and the Algorithms section of XML Encryption Syntax and Processing Specification. CustomAlgorithmSuite and the *.security.custom.* family of options were introduced to make it possible to run CXF SOAP clients and services on systems with FIPS assertions enabled. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_CUSTOM_SIGNATURE_KEY_DERIVATION Since Quarkus CXF : 3.8.1 quarkus.cxf.client."client-name".security.custom.encryption.derived.key.length int 256 The Encryption Derived Key Length (number of bits) to set on the org.apache.wss4j.policy.model.AlgorithmSuite.AlgorithmSuiteType . 
This value is only taken into account if the current security policy has set CustomAlgorithmSuite as an AlgorithmSuite For more information about algorithms, see WS-SecurityPolicy 1.2 specification and the Algorithms section of XML Encryption Syntax and Processing Specification. CustomAlgorithmSuite and the *.security.custom.* family of options were introduced to make it possible to run CXF SOAP clients and services on systems with FIPS assertions enabled. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_CUSTOM_ENCRYPTION_DERIVED_KEY_LENGTH Since Quarkus CXF : 3.8.1 quarkus.cxf.client."client-name".security.custom.signature.derived.key.length int 192 The Signature Derived Key Length (number of bits) to set on the org.apache.wss4j.policy.model.AlgorithmSuite.AlgorithmSuiteType . This value is only taken into account if the current security policy has set CustomAlgorithmSuite as an AlgorithmSuite For more information about algorithms, see WS-SecurityPolicy 1.2 specification and the Algorithms section of XML Encryption Syntax and Processing Specification. CustomAlgorithmSuite and the *.security.custom.* family of options were introduced to make it possible to run CXF SOAP clients and services on systems with FIPS assertions enabled. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_CUSTOM_SIGNATURE_DERIVED_KEY_LENGTH Since Quarkus CXF : 3.8.1 quarkus.cxf.client."client-name".security.custom.minimum.symmetric.key.length int 256 The Minimum Symmetric Key Length (number of bits) to set on the org.apache.wss4j.policy.model.AlgorithmSuite.AlgorithmSuiteType . This value is only taken into account if the current security policy has set CustomAlgorithmSuite as an AlgorithmSuite For more information about algorithms, see WS-SecurityPolicy 1.2 specification and the Algorithms section of XML Encryption Syntax and Processing Specification. CustomAlgorithmSuite and the *.security.custom.* family of options were introduced to make it possible to run CXF SOAP clients and services on systems with FIPS assertions enabled. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_CUSTOM_MINIMUM_SYMMETRIC_KEY_LENGTH Since Quarkus CXF : 3.8.1 quarkus.cxf.client."client-name".security.custom.maximum.symmetric.key.length int 256 The Maximum Symmetric Key Length (number of bits) to set on the org.apache.wss4j.policy.model.AlgorithmSuite.AlgorithmSuiteType . This value is only taken into account if the current security policy has set CustomAlgorithmSuite as an AlgorithmSuite For more information about algorithms, see WS-SecurityPolicy 1.2 specification and the Algorithms section of XML Encryption Syntax and Processing Specification. CustomAlgorithmSuite and the *.security.custom.* family of options were introduced to make it possible to run CXF SOAP clients and services on systems with FIPS assertions enabled. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_CUSTOM_MAXIMUM_SYMMETRIC_KEY_LENGTH Since Quarkus CXF : 3.8.1 quarkus.cxf.client."client-name".security.custom.minimum.asymmetric.key.length int 1024 The Minimum Asymmetric Key Length (number of bits) to set on the org.apache.wss4j.policy.model.AlgorithmSuite.AlgorithmSuiteType . This value is only taken into account if the current security policy has set CustomAlgorithmSuite as an AlgorithmSuite For more information about algorithms, see WS-SecurityPolicy 1.2 specification and the Algorithms section of XML Encryption Syntax and Processing Specification.
CustomAlgorithmSuite and the *.security.custom.* family of options were introduced to make it possible to run CXF SOAP clients and services on systems with FIPS assertions enabled. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_CUSTOM_MINIMUM_ASYMMETRIC_KEY_LENGTH Since Quarkus CXF : 3.8.1 quarkus.cxf.client."client-name".security.custom.maximum.asymmetric.key.length int 4096 The Maximum Asymmetric Key Length (number of bits) to set on the org.apache.wss4j.policy.model.AlgorithmSuite.AlgorithmSuiteType . This value is only taken into account if the current security policy has set CustomAlgorithmSuite as an AlgorithmSuite For more information about algorithms, see WS-SecurityPolicy 1.2 specification and the Algorithms section of XML Encryption Syntax and Processing Specification. CustomAlgorithmSuite and the *.security.custom.* family of options were introduced to make it possible to run CXF SOAP clients and services on systems with FIPS assertions enabled. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_CUSTOM_MAXIMUM_ASYMMETRIC_KEY_LENGTH Since Quarkus CXF : 3.8.1 quarkus.cxf.client."client-name".security.sts.client string A reference to a fully configured org.apache.cxf.ws.security.trust.STSClient bean to communicate with the STS. If not set, the STS client will be created and configured based on other [prefix].security.sts.client.* properties as long as they are available. To work around the fact that org.apache.cxf.ws.security.trust.STSClient does not have a no-args constructor and cannot thus be used as a CDI bean type, you can use the wrapper class io.quarkiverse.cxf.ws.security.sts.client.STSClientBean instead. Tip: Check the Security Token Service (STS) extension page for more information about WS-Trust. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_STS_CLIENT Since Quarkus CXF : 3.8.0 quarkus.cxf.client."client-name".security.sts.client.wsdl string A URL, resource path or local filesystem path pointing to a WSDL document to use when generating the service proxy of the STS client. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_STS_CLIENT_WSDL Since Quarkus CXF : 3.8.0 quarkus.cxf.client."client-name".security.sts.client.service-name string A fully qualified name of the STS service. Common values include: WS-Trust 1.0: {http://schemas.xmlsoap.org/ws/2005/02/trust/}SecurityTokenService WS-Trust 1.3: {http://docs.oasis-open.org/ws-sx/ws-trust/200512/}SecurityTokenService WS-Trust 1.4: {http://docs.oasis-open.org/ws-sx/ws-trust/200802/}SecurityTokenService Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_STS_CLIENT_SERVICE_NAME Since Quarkus CXF : 3.8.0 quarkus.cxf.client."client-name".security.sts.client.endpoint-name string A fully qualified name of the STS endpoint. Common values include: {http://docs.oasis-open.org/ws-sx/ws-trust/200512/}X509_Port {http://docs.oasis-open.org/ws-sx/ws-trust/200512/}Transport_Port {http://docs.oasis-open.org/ws-sx/ws-trust/200512/}UT_Port Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_STS_CLIENT_ENDPOINT_NAME Since Quarkus CXF : 3.8.0 quarkus.cxf.client."client-name".security.sts.client.username string The user name to use when authenticating against the STS.
It is used as follows: As the name in the UsernameToken for WS-Security As the alias name in the keystore to get the user's cert and private key for signature if signature.username is not set As the alias name in the keystore to get the user's public key for encryption if encryption.username is not set Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_STS_CLIENT_USERNAME Since Quarkus CXF : 3.8.0 quarkus.cxf.client."client-name".security.sts.client.password string The password associated with the username . Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_STS_CLIENT_PASSWORD Since Quarkus CXF : 3.8.0 quarkus.cxf.client."client-name".security.sts.client.encryption.username string The user's name for encryption. It is used as the alias name in the keystore to get the user's public key for encryption. If this is not defined, then username is used instead. If that is also not specified, it uses the default alias set in the properties file referenced by encrypt.properties . If that's also not set, and the keystore only contains a single key, that key will be used. For the WS-Security web service provider, the useReqSigCert value can be used to accept (encrypt to) any client whose public key is in the service's truststore (defined in encrypt.properties ). Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_STS_CLIENT_ENCRYPTION_USERNAME Since Quarkus CXF : 3.8.0 quarkus.cxf.client."client-name".security.sts.client.encryption.properties Map<String,String> The Crypto property configuration to use for encryption, if encryption.crypto is not set. Example [prefix].encryption.properties."org.apache.ws.security.crypto.provider" = org.apache.ws.security.components.crypto.Merlin [prefix].encryption.properties."org.apache.ws.security.crypto.merlin.keystore.password" = password [prefix].encryption.properties."org.apache.ws.security.crypto.merlin.file" = certs/alice.jks Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_STS_CLIENT_ENCRYPTION_PROPERTIES Since Quarkus CXF : 3.8.0 quarkus.cxf.client."client-name".security.sts.client.encryption.crypto string A reference to a org.apache.wss4j.common.crypto.Crypto to be used for encryption. If not set, encryption.properties will be used to configure a Crypto instance. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_STS_CLIENT_ENCRYPTION_CRYPTO Since Quarkus CXF : 3.8.0 quarkus.cxf.client."client-name".security.sts.client.token.crypto string A reference to a org.apache.wss4j.common.crypto.Crypto to be used for the STS. If not set, token.properties will be used to configure a Crypto instance. WCF's trust server sometimes will encrypt the token in the response IN ADDITION TO the full security on the message. These properties control the way the STS client will decrypt the EncryptedData elements in the response. These are also used by the token.properties to send/process any RSA/DSAKeyValue tokens used if the KeyType is PublicKey . Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_STS_CLIENT_TOKEN_CRYPTO Since Quarkus CXF : 3.8.0 quarkus.cxf.client."client-name".security.sts.client.token.properties Map<String,String> The Crypto property configuration to use for encryption, if encryption.crypto is not set.
Example [prefix].token.properties."org.apache.ws.security.crypto.provider" = org.apache.ws.security.components.crypto.Merlin [prefix].token.properties."org.apache.ws.security.crypto.merlin.keystore.password" = password [prefix].token.properties."org.apache.ws.security.crypto.merlin.file" = certs/alice.jks Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_STS_CLIENT_TOKEN_PROPERTIES Since Quarkus CXF : 3.8.0 quarkus.cxf.client."client-name".security.sts.client.token.username string The alias name in the keystore to get the user's public key to send to the STS for the PublicKey KeyType case. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_STS_CLIENT_TOKEN_USERNAME Since Quarkus CXF : 3.8.0 quarkus.cxf.client."client-name".security.sts.client.token.usecert boolean false Whether to write out an X509Certificate structure in UseKey/KeyInfo, or whether to write out a KeyValue structure. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_STS_CLIENT_TOKEN_USECERT Since Quarkus CXF : 3.8.0 quarkus.cxf.client."client-name".security.sts.client.soap12-binding boolean false If true , the STS client will be set to send SOAP 1.2 messages; otherwise it will send SOAP 1.1 messages. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__SECURITY_STS_CLIENT_SOAP12_BINDING Since Quarkus CXF : 3.8.0 quarkus.cxf.endpoint."/endpoint-path".security.username string The user's name. It is used as follows: As the name in the UsernameToken for WS-Security As the alias name in the keystore to get the user's cert and private key for signature if signature.username is not set As the alias name in the keystore to get the user's public key for encryption if encryption.username is not set Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_USERNAME Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.password string The user's password when a callback-handler is not defined. This is only used for the password in a WS-Security UsernameToken. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_PASSWORD Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.signature.username string The user's name for signature. It is used as the alias name in the keystore to get the user's cert and private key for signature. If this is not defined, then username is used instead. If that is also not specified, it uses the default alias set in the properties file referenced by signature.properties . If that's also not set, and the keystore only contains a single key, that key will be used. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_SIGNATURE_USERNAME Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.signature.password string The user's password for signature when a callback-handler is not defined. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_SIGNATURE_PASSWORD Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.encryption.username string The user's name for encryption. It is used as the alias name in the keystore to get the user's public key for encryption. If this is not defined, then username is used instead. If that is also not specified, it uses the default alias set in the properties file referenced by encrypt.properties . If that's also not set, and the keystore only contains a single key, that key will be used.
For the WS-Security web service provider, the useReqSigCert value can be used to accept (encrypt to) any client whose public key is in the service's truststore (defined in encrypt.properties ). Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_ENCRYPTION_USERNAME Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.callback-handler string A reference to a javax.security.auth.callback.CallbackHandler bean used to obtain passwords, for both outbound and inbound requests. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_CALLBACK_HANDLER Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.saml-callback-handler string A reference to a javax.security.auth.callback.CallbackHandler implementation used to construct SAML Assertions. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_SAML_CALLBACK_HANDLER Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.signature.properties Map<String,String> The Crypto property configuration to use for signing, if signature.crypto is not set. Example [prefix].signature.properties."org.apache.ws.security.crypto.provider" = org.apache.ws.security.components.crypto.Merlin [prefix].signature.properties."org.apache.ws.security.crypto.merlin.keystore.password" = password [prefix].signature.properties."org.apache.ws.security.crypto.merlin.file" = certs/alice.jks Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_SIGNATURE_PROPERTIES Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.encryption.properties Map<String,String> The Crypto property configuration to use for encryption, if encryption.crypto is not set. Example [prefix].encryption.properties."org.apache.ws.security.crypto.provider" = org.apache.ws.security.components.crypto.Merlin [prefix].encryption.properties."org.apache.ws.security.crypto.merlin.keystore.password" = password [prefix].encryption.properties."org.apache.ws.security.crypto.merlin.file" = certs/alice.jks Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_ENCRYPTION_PROPERTIES Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.signature.crypto string A reference to a org.apache.wss4j.common.crypto.Crypto bean to be used for signature. If not set, signature.properties will be used to configure a Crypto instance. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_SIGNATURE_CRYPTO Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.encryption.crypto string A reference to a org.apache.wss4j.common.crypto.Crypto to be used for encryption. If not set, encryption.properties will be used to configure a Crypto instance. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_ENCRYPTION_CRYPTO Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.encryption.certificate string A message property for a prepared X509 certificate to be used for encryption. If this is not defined, then the certificate will be either loaded from the keystore encryption.properties or extracted from the request (when WS-Security is used and if encryption.username has the value useReqSigCert ). This option is experimental, because it is not covered by tests yet.
Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_ENCRYPTION_CERTIFICATE Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.enable-revocation boolean false If true , Certificate Revocation List (CRL) checking is enabled when verifying trust in a certificate; otherwise it is not enabled. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_ENABLE_REVOCATION Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.enable-unsigned-saml-assertion-principal boolean false If true , unsigned SAML assertions will be allowed as SecurityContext Principals; otherwise they won't be allowed as SecurityContext Principals. Signature The label "unsigned" refers to an internal signature. Even if the token is signed by an external signature (as per the "sender-vouches" requirement), this boolean must still be configured if you want to use the token to set up the security context. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_ENABLE_UNSIGNED_SAML_ASSERTION_PRINCIPAL Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.validate-saml-subject-confirmation boolean true If true , the SubjectConfirmation requirements of a received SAML Token (sender-vouches or holder-of-key) will be validated; otherwise they won't be validated. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_VALIDATE_SAML_SUBJECT_CONFIRMATION Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.sc-from-jaas-subject boolean true If true , security context can be created from JAAS Subject; otherwise it must not be created from JAAS Subject. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_SC_FROM_JAAS_SUBJECT Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.audience-restriction-validation boolean true If true , then if the SAML Token contains Audience Restriction URIs, one of them must match one of the values in audience.restrictions ; otherwise the SAML AudienceRestriction validation is disabled. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_AUDIENCE_RESTRICTION_VALIDATION Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.saml-role-attributename string http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role The attribute URI of the SAML AttributeStatement where the role information is stored. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_SAML_ROLE_ATTRIBUTENAME Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.subject-cert-constraints string A String of regular expressions (separated by the value specified in security.cert.constraints.separator ) which will be applied to the subject DN of the certificate used for signature validation, after trust verification of the certificate chain associated with the certificate. This option is experimental, because it is not covered by tests yet. 
Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_SUBJECT_CERT_CONSTRAINTS Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.cert-constraints-separator string , The separator that is used to parse certificate constraints configured in security.subject.cert.constraints This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_CERT_CONSTRAINTS_SEPARATOR Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.actor string The actor or role name of the wsse:Security header. If this parameter is omitted, the actor name is not set. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_ACTOR Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.validate.token boolean true If true , the password of a received UsernameToken will be validated; otherwise it won't be validated. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_VALIDATE_TOKEN Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.username-token.always.encrypted boolean true Whether to always encrypt UsernameTokens that are defined as a SupportingToken . This should not be set to false in a production environment, as it exposes the password (or the digest of the password) on the wire. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_USERNAME_TOKEN_ALWAYS_ENCRYPTED Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.is-bsp-compliant boolean true If true , the compliance with the Basic Security Profile (BSP) 1.1 will be ensured; otherwise it will not be ensured. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_IS_BSP_COMPLIANT Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.enable.nonce.cache boolean If true , the UsernameToken nonces will be cached for both message initiators and recipients; otherwise they won't be cached for either message initiators or recipients. The default is true for message recipients, and false for message initiators. Caching Caching only applies when either a UsernameToken WS-SecurityPolicy is in effect, or the UsernameToken action has been configured for the non-security-policy case. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_ENABLE_NONCE_CACHE Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.enable.timestamp.cache boolean If true , the Timestamp Created Strings (these are only cached in conjunction with a message Signature) will be cached for both message initiators and recipients; otherwise they won't be cached for either message initiators or recipients. The default is true for message recipients, and false for message initiators. Caching Caching only applies when either an IncludeTimestamp policy is in effect, or the Timestamp action has been configured for the non-security-policy case. This option is experimental, because it is not covered by tests yet.
Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_ENABLE_TIMESTAMP_CACHE Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.enable.streaming boolean false If true , the new streaming (StAX) implementation of WS-Security is used; otherwise the old DOM implementation is used. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_ENABLE_STREAMING Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.return.security.error boolean false If true , detailed security error messages are sent to clients; otherwise the details are omitted and only a generic error message is sent. The "real" security errors should not be returned to the client in production, as they may leak information about the deployment, or otherwise provide an "oracle" for attacks. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_RETURN_SECURITY_ERROR Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.must-understand boolean true If true , the SOAP mustUnderstand header is included in security headers based on a WS-SecurityPolicy; otherwise the header is always omitted. Works only with enable.streaming = true - see CXF-8940 Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_MUST_UNDERSTAND Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.enable.saml.cache boolean If true and the token contains a OneTimeUse Condition, the SAML2 Token Identifiers will be cached for both message initiators and recipients; otherwise they won't be cached for either message initiators or recipients. The default is true for message recipients, and false for message initiators. Caching only applies when either a SamlToken policy is in effect, or a SAML action has been configured for the non-security-policy case. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_ENABLE_SAML_CACHE Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.store.bytes.in.attachment boolean Whether to store bytes (CipherData or BinarySecurityToken) in an attachment. The default is true if MTOM is enabled. Set it to false to BASE-64 encode the bytes and inline them in the message instead. Setting this to true is more efficient, as it means that the BASE-64 encoding step can be skipped. This only applies to the DOM WS-Security stack. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_STORE_BYTES_IN_ATTACHMENT Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.swa.encryption.attachment.transform.content boolean false If true , Attachment-Content-Only transform will be used when an Attachment is encrypted via a WS-SecurityPolicy expression; otherwise Attachment-Complete transform will be used when an Attachment is encrypted via a WS-SecurityPolicy expression. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_SWA_ENCRYPTION_ATTACHMENT_TRANSFORM_CONTENT Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.use.str.transform boolean true If true , the STR (Security Token Reference) Transform will be used when (externally) signing a SAML Token; otherwise the STR (Security Token Reference) Transform will not be used. Some frameworks cannot process the SecurityTokenReference .
You may set this to false in such cases. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_USE_STR_TRANSFORM Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.add.inclusive.prefixes boolean true If true , an InclusiveNamespaces PrefixList will be added as a CanonicalizationMethod child when generating Signatures using WSConstants.C14N_EXCL_OMIT_COMMENTS ; otherwise the PrefixList will not be added. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_ADD_INCLUSIVE_PREFIXES Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.disable.require.client.cert.check boolean false If true , the enforcement of the WS-SecurityPolicy RequireClientCertificate policy will be disabled; otherwise it is enabled. Some servers may not do client certificate verification at the start of the SSL handshake, and therefore the client certificates may not be available to the WS-Security layer for policy verification. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_DISABLE_REQUIRE_CLIENT_CERT_CHECK Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.expand.xop.include boolean If true , the xop:Include elements will be searched for encryption and signature (on the outbound side) or for signature verification (on the inbound side); otherwise the search won't happen. This ensures that the actual bytes are signed, and not just the reference. The default is true if MTOM is enabled, otherwise the default is false . This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_EXPAND_XOP_INCLUDE Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.timestamp.timeToLive string 300 The time in seconds to add to the Creation value of an incoming Timestamp to determine whether to accept it as valid or not. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_TIMESTAMP_TIMETOLIVE Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.timestamp.futureTimeToLive string 60 The time in seconds in the future within which the Created time of an incoming Timestamp is valid. The default is greater than zero to avoid problems where clocks are slightly askew. Set this to 0 to reject all future-created `Timestamp`s. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_TIMESTAMP_FUTURETIMETOLIVE Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.usernametoken.timeToLive string 300 The time in seconds to add to the Creation value of an incoming UsernameToken to determine whether to accept it as valid or not. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_USERNAMETOKEN_TIMETOLIVE Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.usernametoken.futureTimeToLive string 60 The time in seconds in the future within which the Created time of an incoming UsernameToken is valid.
The default is greater than zero to avoid problems where clocks are slightly askew. Set this to 0 to reject all future-created `UsernameToken`s. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_USERNAMETOKEN_FUTURETIMETOLIVE Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.spnego.client.action string A reference to a org.apache.wss4j.common.spnego.SpnegoClientAction bean to use for SPNEGO. This allows the user to plug in a different implementation to obtain a service ticket. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_SPNEGO_CLIENT_ACTION Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.nonce.cache.instance string A reference to a org.apache.wss4j.common.cache.ReplayCache bean used to cache UsernameToken nonces. A org.apache.wss4j.common.cache.EHCacheReplayCache instance is used by default. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_NONCE_CACHE_INSTANCE Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.timestamp.cache.instance string A reference to a org.apache.wss4j.common.cache.ReplayCache bean used to cache Timestamp Created Strings. A org.apache.wss4j.common.cache.EHCacheReplayCache instance is used by default. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_TIMESTAMP_CACHE_INSTANCE Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.saml.cache.instance string A reference to a org.apache.wss4j.common.cache.ReplayCache bean used to cache SAML2 Token Identifier Strings (if the token contains a OneTimeUse condition). A org.apache.wss4j.common.cache.EHCacheReplayCache instance is used by default. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_SAML_CACHE_INSTANCE Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.cache.config.file string Set this property to point to a configuration file for the underlying caching implementation for the TokenStore . The default configuration file that is used is cxf-ehcache.xml in the org.apache.cxf:cxf-rt-security JAR. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_CACHE_CONFIG_FILE Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.token-store-cache-instance string A reference to a org.apache.cxf.ws.security.tokenstore.TokenStore bean to use for caching security tokens. A default TokenStore instance is used if this property is not set. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_TOKEN_STORE_CACHE_INSTANCE Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.cache.identifier string The Cache Identifier to use with the TokenStore. CXF uses the following key to retrieve a token store: org.apache.cxf.ws.security.tokenstore.TokenStore-<identifier> . This key can be used to set up a service-specific cache configuration. If the identifier does not match, then it falls back to a cache configuration with key org.apache.cxf.ws.security.tokenstore.TokenStore .
The default <identifier> is the QName of the service in question. However, to pick up a custom cache configuration (for example, if you want to specify a TokenStore per client proxy), it can be configured with this identifier instead. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_CACHE_IDENTIFIER Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.role.classifier string The Subject Role Classifier to use. If one of the WSS4J Validators returns a JAAS Subject from Validation, then the WSS4JInInterceptor will attempt to create a SecurityContext based on this Subject. If this value is not specified, then it tries to get roles using the DefaultSecurityContext in org.apache.cxf:cxf-core . Otherwise it uses this value in combination with the role.classifier.type to get the roles from the Subject . This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_ROLE_CLASSIFIER Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.role.classifier.type string prefix The Subject Role Classifier Type to use. If one of the WSS4J Validators returns a JAAS Subject from Validation, then the WSS4JInInterceptor will attempt to create a SecurityContext based on this Subject. Currently accepted values are prefix or classname . Must be used in conjunction with the role.classifier . This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_ROLE_CLASSIFIER_TYPE Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.asymmetric.signature.algorithm string This configuration tag allows the user to override the default Asymmetric Signature algorithm (RSA-SHA1) for use in WS-SecurityPolicy, as the WS-SecurityPolicy specification does not allow the use of other algorithms at present. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_ASYMMETRIC_SIGNATURE_ALGORITHM Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.symmetric.signature.algorithm string This configuration tag allows the user to override the default Symmetric Signature algorithm (HMAC-SHA1) for use in WS-SecurityPolicy, as the WS-SecurityPolicy specification does not allow the use of other algorithms at present. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_SYMMETRIC_SIGNATURE_ALGORITHM Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.password.encryptor.instance string A reference to a org.apache.wss4j.common.crypto.PasswordEncryptor bean, which is used to encrypt or decrypt passwords in the Merlin Crypto implementation (or any custom Crypto implementations). By default, WSS4J uses the org.apache.wss4j.common.crypto.JasyptPasswordEncryptor which must be instantiated with a password to use to decrypt keystore passwords in the Merlin Crypto definition. This password is obtained via the CallbackHandler defined via callback-handler . The encrypted passwords must be stored in the format "ENC(encoded encrypted password)". This option is experimental, because it is not covered by tests yet.
Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_PASSWORD_ENCRYPTOR_INSTANCE Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.delegated.credential string A reference to a Kerberos org.ietf.jgss.GSSCredential bean to use for WS-Security. This is used to retrieve a service ticket instead of using the client credentials. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_DELEGATED_CREDENTIAL Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.security.context.creator string A reference to a org.apache.cxf.ws.security.wss4j.WSS4JSecurityContextCreator bean that is used to create a CXF SecurityContext from the set of WSS4J processing results. The default implementation is org.apache.cxf.ws.security.wss4j.DefaultWSS4JSecurityContextCreator . This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_SECURITY_CONTEXT_CREATOR Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.security.token.lifetime long 300000 The security token lifetime value (in milliseconds). This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_SECURITY_TOKEN_LIFETIME Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.kerberos.request.credential.delegation boolean false If true , credential delegation is requested in the KerberosClient; otherwise credential delegation is not requested in the KerberosClient. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_KERBEROS_REQUEST_CREDENTIAL_DELEGATION Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.kerberos.use.credential.delegation boolean false If true , a GSSCredential bean is retrieved from the Message Context using the delegated.credential property and then it is used to obtain a service ticket. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_KERBEROS_USE_CREDENTIAL_DELEGATION Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.kerberos.is.username.in.servicename.form boolean false If true , the Kerberos username is in servicename form; otherwise it is not. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_KERBEROS_IS_USERNAME_IN_SERVICENAME_FORM Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.kerberos.jaas.context string The JAAS Context name to use for Kerberos. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_KERBEROS_JAAS_CONTEXT Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.kerberos.spn string The Kerberos Service Provider Name (spn) to use. This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_KERBEROS_SPN Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.kerberos.client string A reference to a org.apache.cxf.ws.security.kerberos.KerberosClient bean used to obtain a service ticket.
This option is experimental, because it is not covered by tests yet. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_KERBEROS_CLIENT Since Quarkus CXF : 2.5.0 quarkus.cxf.endpoint."/endpoint-path".security.custom.digest.algorithm string http://www.w3.org/2001/04/xmlenc#sha256 The Digest Algorithm to set on the org.apache.wss4j.policy.model.AlgorithmSuite.AlgorithmSuiteType . This value is only taken into account if the current security policy has set CustomAlgorithmSuite as an AlgorithmSuite , for instance <wsp:Policy wsu:Id="SecurityServiceEncryptThenSignPolicy" xmlns:wsp="http://www.w3.org/ns/ws-policy" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702"> <wsp:ExactlyOne> <wsp:All> <sp:AsymmetricBinding xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702"> <wsp:Policy> ... <sp:AlgorithmSuite> <wsp:Policy> <sp:CustomAlgorithmSuite/> </wsp:Policy> </sp:AlgorithmSuite> ... </wsp:Policy> </sp:AsymmetricBinding> ... </wsp:All> </wsp:ExactlyOne> </wsp:Policy> For more information about algorithms, see WS-SecurityPolicy 1.2 specification and the Algorithms section of XML Encryption Syntax and Processing Specification. CustomAlgorithmSuite and the *.security.custom.* family of options were introduced to make it possible to run CXF SOAP clients and services on systems with FIPS assertions enabled. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_CUSTOM_DIGEST_ALGORITHM Since Quarkus CXF : 3.8.1 quarkus.cxf.endpoint."/endpoint-path".security.custom.encryption.algorithm string http://www.w3.org/2009/xmlenc11#aes256-gcm The Encryption Algorithm to set on the org.apache.wss4j.policy.model.AlgorithmSuite.AlgorithmSuiteType . This value is only taken into account if the current security policy has set CustomAlgorithmSuite as an AlgorithmSuite For more information about algorithms, see WS-SecurityPolicy 1.2 specification and the Algorithms section of XML Encryption Syntax and Processing Specification. CustomAlgorithmSuite and the *.security.custom.* family of options were introduced to make it possible to run CXF SOAP clients and services on systems with FIPS assertions enabled. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_CUSTOM_ENCRYPTION_ALGORITHM Since Quarkus CXF : 3.8.1 quarkus.cxf.endpoint."/endpoint-path".security.custom.symmetric.key.encryption.algorithm string http://www.w3.org/2001/04/xmlenc#kw-aes256 The Symmetric Key Encryption Algorithm to set on the org.apache.wss4j.policy.model.AlgorithmSuite.AlgorithmSuiteType . This value is only taken into account if the current security policy has set CustomAlgorithmSuite as an AlgorithmSuite For more information about algorithms, see WS-SecurityPolicy 1.2 specification and the Algorithms section of XML Encryption Syntax and Processing Specification. CustomAlgorithmSuite and the *.security.custom.* family of options were introduced to make it possible to run CXF SOAP clients and services on systems with FIPS assertions enabled. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_CUSTOM_SYMMETRIC_KEY_ENCRYPTION_ALGORITHM Since Quarkus CXF : 3.8.1 quarkus.cxf.endpoint."/endpoint-path".security.custom.asymmetric.key.encryption.algorithm string http://www.w3.org/2001/04/xmlenc#rsa-1_5 The Asymmetric Key Encryption Algorithm to set on the org.apache.wss4j.policy.model.AlgorithmSuite.AlgorithmSuiteType . 
This value is only taken into account if the current security policy has set CustomAlgorithmSuite as an AlgorithmSuite For more information about algorithms, see WS-SecurityPolicy 1.2 specification and the Algorithms section of XML Encryption Syntax and Processing Specification. CustomAlgorithmSuite and the *.security.custom.* family of options were introduced to make it possible to run CXF SOAP clients and services on systems with FIPS assertions enabled. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_CUSTOM_ASYMMETRIC_KEY_ENCRYPTION_ALGORITHM Since Quarkus CXF : 3.8.1 quarkus.cxf.endpoint."/endpoint-path".security.custom.encryption.key.derivation string http://schemas.xmlsoap.org/ws/2005/02/sc/dk/p_sha1 The Encryption Key Derivation to set on the org.apache.wss4j.policy.model.AlgorithmSuite.AlgorithmSuiteType . This value is only taken into account if the current security policy has set CustomAlgorithmSuite as an AlgorithmSuite For more information about algorithms, see WS-SecurityPolicy 1.2 specification and the Algorithms section of XML Encryption Syntax and Processing Specification. CustomAlgorithmSuite and the *.security.custom.* family of options were introduced to make it possible to run CXF SOAP clients and services on systems with FIPS assertions enabled. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_CUSTOM_ENCRYPTION_KEY_DERIVATION Since Quarkus CXF : 3.8.1 quarkus.cxf.endpoint."/endpoint-path".security.custom.signature.key.derivation string http://schemas.xmlsoap.org/ws/2005/02/sc/dk/p_sha1 The Signature Key Derivation to set on the org.apache.wss4j.policy.model.AlgorithmSuite.AlgorithmSuiteType . This value is only taken into account if the current security policy has set CustomAlgorithmSuite as an AlgorithmSuite For more information about algorithms, see WS-SecurityPolicy 1.2 specification and the Algorithms section of XML Encryption Syntax and Processing Specification. CustomAlgorithmSuite and the *.security.custom.* family of options were introduced to make it possible to run CXF SOAP clients and services on systems with FIPS assertions enabled. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_CUSTOM_SIGNATURE_KEY_DERIVATION Since Quarkus CXF : 3.8.1 quarkus.cxf.endpoint."/endpoint-path".security.custom.encryption.derived.key.length int 256 The Encryption Derived Key Length (number of bits) to set on the org.apache.wss4j.policy.model.AlgorithmSuite.AlgorithmSuiteType . This value is only taken into account if the current security policy has set CustomAlgorithmSuite as an AlgorithmSuite For more information about algorithms, see WS-SecurityPolicy 1.2 specification and the Algorithms section of XML Encryption Syntax and Processing Specification. CustomAlgorithmSuite and the *.security.custom.* family of options were introduced to make it possible to run CXF SOAP clients and services on systems with FIPS assertions enabled. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_CUSTOM_ENCRYPTION_DERIVED_KEY_LENGTH Since Quarkus CXF : 3.8.1 quarkus.cxf.endpoint."/endpoint-path".security.custom.signature.derived.key.length int 192 The Signature Derived Key Length (number of bits) to set on the org.apache.wss4j.policy.model.AlgorithmSuite.AlgorithmSuiteType . 
This value is only taken into account if the current security policy has set CustomAlgorithmSuite as an AlgorithmSuite For more information about algorithms, see WS-SecurityPolicy 1.2 specification and the Algorithms section of XML Encryption Syntax and Processing Specification. CustomAlgorithmSuite and the *.security.custom.* family of options were introduced to make it possible to run CXF SOAP clients and services on systems with FIPS assertions enabled. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_CUSTOM_SIGNATURE_DERIVED_KEY_LENGTH Since Quarkus CXF : 3.8.1 quarkus.cxf.endpoint."/endpoint-path".security.custom.minimum.symmetric.key.length int 256 The Minimum Symmetric Key Length (number of bits) to set on the org.apache.wss4j.policy.model.AlgorithmSuite.AlgorithmSuiteType . This value is only taken into account if the current security policy has set CustomAlgorithmSuite as an AlgorithmSuite For more information about algorithms, see WS-SecurityPolicy 1.2 specification and the Algorithms section of XML Encryption Syntax and Processing Specification. CustomAlgorithmSuite and the *.security.custom.* family of options were introduced to make it possible to run CXF SOAP clients and services on systems with FIPS assertions enabled. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_CUSTOM_MINIMUM_SYMMETRIC_KEY_LENGTH Since Quarkus CXF : 3.8.1 quarkus.cxf.endpoint."/endpoint-path".security.custom.maximum.symmetric.key.length int 256 The Maximum Symmetric Key Length (number of bits) to set on the org.apache.wss4j.policy.model.AlgorithmSuite.AlgorithmSuiteType . This value is only taken into account if the current security policy has set CustomAlgorithmSuite as an AlgorithmSuite For more information about algorithms, see WS-SecurityPolicy 1.2 specification and the Algorithms section of XML Encryption Syntax and Processing Specification. CustomAlgorithmSuite and the *.security.custom.* family of options were introduced to make it possible to run CXF SOAP clients and services on systems with FIPS assertions enabled. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_CUSTOM_MAXIMUM_SYMMETRIC_KEY_LENGTH Since Quarkus CXF : 3.8.1 quarkus.cxf.endpoint."/endpoint-path".security.custom.minimum.asymmetric.key.length int 1024 The Minimum Asymmetric Key Length (number of bits) to set on the org.apache.wss4j.policy.model.AlgorithmSuite.AlgorithmSuiteType . This value is only taken into account if the current security policy has set CustomAlgorithmSuite as an AlgorithmSuite For more information about algorithms, see WS-SecurityPolicy 1.2 specification and the Algorithms section of XML Encryption Syntax and Processing Specification. CustomAlgorithmSuite and the *.security.custom.* family of options were introduced to make it possible to run CXF SOAP clients and services on systems with FIPS assertions enabled. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_CUSTOM_MINIMUM_ASYMMETRIC_KEY_LENGTH Since Quarkus CXF : 3.8.1 quarkus.cxf.endpoint."/endpoint-path".security.custom.maximum.asymmetric.key.length int 4096 The Maximum Asymmetric Key Length (number of bits) to set on the org.apache.wss4j.policy.model.AlgorithmSuite.AlgorithmSuiteType . This value is only taken into account if the current security policy has set CustomAlgorithmSuite as an AlgorithmSuite For more information about algorithms, see WS-SecurityPolicy 1.2 specification and the Algorithms section of XML Encryption Syntax and Processing Specification. CustomAlgorithmSuite and the *.security.custom.* family of options were introduced to make it possible to run CXF SOAP clients and services on systems with FIPS assertions enabled. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__SECURITY_CUSTOM_MAXIMUM_ASYMMETRIC_KEY_LENGTH Since Quarkus CXF : 3.8.1
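For orientation, the following application.properties sketch shows how the *.security.custom.* options described above could be overridden for a single service endpoint. This is only an illustrative sketch: the endpoint path /helloCustomAlgorithms is a made-up example, the algorithm URIs are sample XML Encryption identifiers rather than recommendations, and the overrides only take effect when the endpoint's WS-SecurityPolicy selects CustomAlgorithmSuite as its AlgorithmSuite .
# Hypothetical endpoint path; adjust it to your own service endpoint.
# These overrides are only honored when the policy in effect uses CustomAlgorithmSuite.
quarkus.cxf.endpoint."/helloCustomAlgorithms".security.custom.digest.algorithm = http://www.w3.org/2001/04/xmlenc#sha256
quarkus.cxf.endpoint."/helloCustomAlgorithms".security.custom.encryption.algorithm = http://www.w3.org/2009/xmlenc11#aes256-gcm
quarkus.cxf.endpoint."/helloCustomAlgorithms".security.custom.symmetric.key.encryption.algorithm = http://www.w3.org/2001/04/xmlenc#kw-aes256
quarkus.cxf.endpoint."/helloCustomAlgorithms".security.custom.asymmetric.key.encryption.algorithm = http://www.w3.org/2001/04/xmlenc#rsa-oaep-mgf1p
Options that are not set keep the defaults listed in the table above.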
5.5. WS-ReliableMessaging WS-ReliableMessaging (WS-RM) is a protocol that ensures reliable delivery of messages in a distributed environment, even in the presence of software, system, or network failures. This extension provides the CXF framework's WS-ReliableMessaging implementation. 5.5.1. Maven coordinates Create a new project using quarkus-cxf-rt-ws-rm on code.quarkus.redhat.com or add these coordinates to your existing project: <dependency> <groupId>io.quarkiverse.cxf</groupId> <artifactId>quarkus-cxf-rt-ws-rm</artifactId> </dependency> 5.5.2. Supported standards WS-ReliableMessaging 5.5.3. Usage Once your application depends on quarkus-cxf-rt-ws-rm , WS-RM is enabled for all clients and service endpoints defined in application.properties . This is due to the fact that the quarkus.cxf.client."client-name".rm.enabled and quarkus.cxf.endpoint."/endpoint-path".rm.enabled properties are true by default. Enabling WS-RM for a client or service endpoint means that WS-RM interceptors will be added to the given client or endpoint. In addition to that, you may want to set some of the options documented below and/or the following WS-Addressing options: quarkus.cxf.client."client-name".decoupled-endpoint quarkus.cxf.decoupled-endpoint-base 5.5.3.1. Runnable example There is an integration test covering WS-RM with a decoupled endpoint in the Quarkus CXF source tree. It is split into two separate applications that communicate with each other: Server Client To run it, you need to install the server into your local Maven repository first: $ cd test-util-parent/test-ws-rm-server-jvm $ mvn clean install And then you can run the test scenario ( integration-tests/ws-rm-client/src/test/java/io/quarkiverse/cxf/it/ws/rm/client/WsReliableMessagingTest.java ) implemented in the client module: $ cd ../../integration-tests/ws-rm-client $ mvn clean test You should see the exchange of SOAP messages between the client, the server and the decoupled endpoint in the console. 5.5.4. Configuration Configuration property fixed at build time. All other configuration properties are overridable at runtime. Configuration property Type Default quarkus.cxf.rm.namespace string http://schemas.xmlsoap.org/ws/2005/02/rm WS-RM version namespace: http://schemas.xmlsoap.org/ws/2005/02/rm/ or http://docs.oasis-open.org/ws-rx/wsrm/200702 Environment variable : QUARKUS_CXF_RM_NAMESPACE Since Quarkus CXF : 2.7.0 quarkus.cxf.rm.wsa-namespace string http://schemas.xmlsoap.org/ws/2004/08/addressing WS-Addressing version namespace: http://schemas.xmlsoap.org/ws/2004/08/addressing or http://www.w3.org/2005/08/addressing . Note that this property is ignored unless you are using the http://schemas.xmlsoap.org/ws/2005/02/rm/ RM namespace. Environment variable : QUARKUS_CXF_RM_WSA_NAMESPACE Since Quarkus CXF : 2.7.0 quarkus.cxf.rm.inactivity-timeout long A time duration in milliseconds after which the associated sequence will be closed if no messages (including acknowledgments and other control messages) were exchanged between the sender and receiver during that period of time. If not set, the associated sequence will never be closed due to inactivity.
Environment variable : QUARKUS_CXF_RM_INACTIVITY_TIMEOUT Since Quarkus CXF : 2.7.0 quarkus.cxf.rm.retransmission-interval long 3000 A time duration in milliseconds between successive attempts to resend a message that has not been acknowledged by the receiver. Environment variable : QUARKUS_CXF_RM_RETRANSMISSION_INTERVAL Since Quarkus CXF : 2.7.0 quarkus.cxf.rm.exponential-backoff boolean false If true , the retransmission interval will be doubled on every transmission attempt; otherwise the retransmission interval stays equal to quarkus.cxf.rm.retransmission-interval for every retransmission attempt. Environment variable : QUARKUS_CXF_RM_EXPONENTIAL_BACKOFF Since Quarkus CXF : 2.7.0 quarkus.cxf.rm.acknowledgement-interval long A time duration in milliseconds within which an acknowledgement for a received message is expected to be sent by an RM destination. If not specified, the acknowledgements will be sent immediately. Environment variable : QUARKUS_CXF_RM_ACKNOWLEDGEMENT_INTERVAL Since Quarkus CXF : 2.7.0 quarkus.cxf.rm.store string A reference to a org.apache.cxf.ws.rm.persistence.RMStore bean used to store source and destination sequences and message references. Environment variable : QUARKUS_CXF_RM_STORE Since Quarkus CXF : 2.7.0 quarkus.cxf.rm.feature-ref string #defaultRmFeature A reference to a org.apache.cxf.ws.rm.feature.RMFeature bean to set on clients and service endpoints which have quarkus.cxf.[client|service]."name".rm.enabled = true . If the value is #defaultRmFeature then Quarkus CXF creates and configures the bean for you. Environment variable : QUARKUS_CXF_RM_FEATURE_REF Since Quarkus CXF : 2.7.0 quarkus.cxf.client."client-name".rm.enabled boolean true If true then the WS-ReliableMessaging interceptors will be added to this client or service endpoint. Environment variable : QUARKUS_CXF_CLIENT__CLIENT_NAME__RM_ENABLED Since Quarkus CXF : 2.7.0 quarkus.cxf.endpoint."/endpoint-path".rm.enabled boolean true If true then the WS-ReliableMessaging interceptors will be added to this client or service endpoint. Environment variable : QUARKUS_CXF_ENDPOINT___ENDPOINT_PATH__RM_ENABLED Since Quarkus CXF : 2.7.0 5.6. Security Token Service (STS) Issue, renew and validate security tokens in the context of WS-Trust . 5.6.1. Maven coordinates Create a new project using quarkus-cxf-services-sts on code.quarkus.redhat.com or add these coordinates to your existing project: <dependency> <groupId>io.quarkiverse.cxf</groupId> <artifactId>quarkus-cxf-services-sts</artifactId> </dependency> 5.6.2. Supported standards WS-Trust 5.6.3. Usage Here are the key parts of a basic WS-Trust scenario: WS-SecurityPolicy - besides defining security requirements, such as transport protocols, encryption and signing, it can also contain an <IssuedToken> assertion. It specifies the requirements and constraints for the security tokens that the client must adhere to when accessing the service. Security Token Service (STS) - issues, validates, and renews security tokens upon request. It acts as a trusted authority that authenticates clients and issues tokens that assert the client's identity and permissions. Client - requests a token from the STS to access a web service. It must authenticate itself to the STS and provide details about the kind of token required. Service - relies on the STS to authenticate clients and validate their tokens. 5.6.3.1. Runnable example There is an integration test covering WS-Trust in the Quarkus CXF source tree.
Let's walk through it and see how the individual parts are set to work together. 5.6.3.1.1. WS-SecurityPolicy The policy is located in asymmetric-saml2-policy.xml file. Its key part is the <IssuedToken> assertion requiring a SAML 2.0 token: asymmetric-saml2-policy.xml <sp:IssuedToken sp:IncludeToken="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702/IncludeToken/AlwaysToRecipient"> <sp:RequestSecurityTokenTemplate> <t:TokenType>http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</t:TokenType> <t:KeyType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/PublicKey</t:KeyType> </sp:RequestSecurityTokenTemplate> <wsp:Policy> <sp:RequireInternalReference /> </wsp:Policy> <sp:Issuer> <wsaws:Address>http://localhost:8081/services/sts</wsaws:Address> <wsaws:Metadata xmlns:wsdli="http://www.w3.org/2006/01/wsdl-instance" wsdli:wsdlLocation="http://localhost:8081/services/sts?wsdl"> <wsaw:ServiceName xmlns:wsaw="http://www.w3.org/2006/05/addressing/wsdl" xmlns:stsns="http://docs.oasis-open.org/ws-sx/ws-trust/200512/" EndpointName="UT_Port">stsns:SecurityTokenService</wsaw:ServiceName> </wsaws:Metadata> </sp:Issuer> </sp:IssuedToken> 5.6.3.1.2. Security Token Service (STS) The STS is implemented in Sts.java : Sts.java @WebServiceProvider(serviceName = "SecurityTokenService", portName = "UT_Port", targetNamespace = "http://docs.oasis-open.org/ws-sx/ws-trust/200512/", wsdlLocation = "ws-trust-1.4-service.wsdl") public class Sts extends SecurityTokenServiceProvider { public Sts() throws Exception { super(); StaticSTSProperties props = new StaticSTSProperties(); props.setSignatureCryptoProperties("stsKeystore.properties"); props.setSignatureUsername("sts"); props.setCallbackHandlerClass(StsCallbackHandler.class.getName()); props.setIssuer("SampleSTSIssuer"); List<ServiceMBean> services = new LinkedList<ServiceMBean>(); StaticService service = new StaticService(); final Config config = ConfigProvider.getConfig(); final int port = LaunchMode.current().equals(LaunchMode.TEST) ? 
config.getValue("quarkus.http.test-port", Integer.class) : config.getValue("quarkus.http.port", Integer.class); service.setEndpoints(Arrays.asList( "http://localhost:" + port + "/services/hello-ws-trust", "http://localhost:" + port + "/services/hello-ws-trust-actas", "http://localhost:" + port + "/services/hello-ws-trust-onbehalfof")); services.add(service); TokenIssueOperation issueOperation = new TokenIssueOperation(); issueOperation.setServices(services); issueOperation.getTokenProviders().add(new SAMLTokenProvider()); // required for OnBehalfOf issueOperation.getTokenValidators().add(new UsernameTokenValidator()); // added for OnBehalfOf and ActAs issueOperation.getDelegationHandlers().add(new UsernameTokenDelegationHandler()); issueOperation.setStsProperties(props); TokenValidateOperation validateOperation = new TokenValidateOperation(); validateOperation.getTokenValidators().add(new SAMLTokenValidator()); validateOperation.setStsProperties(props); this.setIssueOperation(issueOperation); this.setValidateOperation(validateOperation); } } and configured in application.properties : application.properties quarkus.cxf.endpoint."/sts".implementor = io.quarkiverse.cxf.it.ws.trust.sts.Sts quarkus.cxf.endpoint."/sts".logging.enabled = pretty quarkus.cxf.endpoint."/sts".security.signature.username = sts quarkus.cxf.endpoint."/sts".security.signature.password = password quarkus.cxf.endpoint."/sts".security.validate.token = false quarkus.cxf.endpoint."/sts".security.signature.properties."org.apache.ws.security.crypto.provider" = org.apache.ws.security.components.crypto.Merlin quarkus.cxf.endpoint."/sts".security.signature.properties."org.apache.ws.security.crypto.merlin.keystore.type" = pkcs12 quarkus.cxf.endpoint."/sts".security.signature.properties."org.apache.ws.security.crypto.merlin.keystore.password" = password quarkus.cxf.endpoint."/sts".security.signature.properties."org.apache.ws.security.crypto.merlin.keystore.file" = sts.pkcs12 5.6.3.1.3. 
Service The service is implemented in TrustHelloServiceImpl.java : TrustHelloServiceImpl.java @WebService(portName = "TrustHelloServicePort", serviceName = "TrustHelloService", targetNamespace = "https://quarkiverse.github.io/quarkiverse-docs/quarkus-cxf/test/ws-trust", endpointInterface = "io.quarkiverse.cxf.it.ws.trust.server.TrustHelloService") public class TrustHelloServiceImpl implements TrustHelloService { @WebMethod @Override public String hello(String person) { return "Hello " + person + "!"; } } The asymmetric-saml2-policy.xml mentioned above is set in the Service Endpoint Interface TrustHelloService.java : TrustHelloService.java @WebService(targetNamespace = "https://quarkiverse.github.io/quarkiverse-docs/quarkus-cxf/test/ws-trust") @Policy(placement = Policy.Placement.BINDING, uri = "classpath:/asymmetric-saml2-policy.xml") public interface TrustHelloService { @WebMethod @Policies({ @Policy(placement = Policy.Placement.BINDING_OPERATION_INPUT, uri = "classpath:/io-policy.xml"), @Policy(placement = Policy.Placement.BINDING_OPERATION_OUTPUT, uri = "classpath:/io-policy.xml") }) String hello(String person); } The service endpoint is configured in application.properties : application.properties quarkus.cxf.endpoint."/hello-ws-trust".implementor = io.quarkiverse.cxf.it.ws.trust.server.TrustHelloServiceImpl quarkus.cxf.endpoint."/hello-ws-trust".logging.enabled = pretty quarkus.cxf.endpoint."/hello-ws-trust".security.signature.username = service quarkus.cxf.endpoint."/hello-ws-trust".security.signature.password = password quarkus.cxf.endpoint."/hello-ws-trust".security.signature.properties."org.apache.ws.security.crypto.provider" = org.apache.ws.security.components.crypto.Merlin quarkus.cxf.endpoint."/hello-ws-trust".security.signature.properties."org.apache.ws.security.crypto.merlin.keystore.type" = pkcs12 quarkus.cxf.endpoint."/hello-ws-trust".security.signature.properties."org.apache.ws.security.crypto.merlin.keystore.password" = password quarkus.cxf.endpoint."/hello-ws-trust".security.signature.properties."org.apache.ws.security.crypto.merlin.keystore.alias" = service quarkus.cxf.endpoint."/hello-ws-trust".security.signature.properties."org.apache.ws.security.crypto.merlin.file" = service.pkcs12 quarkus.cxf.endpoint."/hello-ws-trust".security.encryption.properties."org.apache.ws.security.crypto.provider" = org.apache.ws.security.components.crypto.Merlin quarkus.cxf.endpoint."/hello-ws-trust".security.encryption.properties."org.apache.ws.security.crypto.merlin.keystore.type" = pkcs12 quarkus.cxf.endpoint."/hello-ws-trust".security.encryption.properties."org.apache.ws.security.crypto.merlin.keystore.password" = password quarkus.cxf.endpoint."/hello-ws-trust".security.encryption.properties."org.apache.ws.security.crypto.merlin.keystore.alias" = service quarkus.cxf.endpoint."/hello-ws-trust".security.encryption.properties."org.apache.ws.security.crypto.merlin.file" = service.pkcs12 5.6.3.1.4. Client Finally, for the SOAP client to be able to communicate with the service, its STSClient needs to be configured.
It can be done in application.properties : application.properties quarkus.cxf.client.hello-ws-trust.security.sts.client.wsdl = http://localhost:USD{quarkus.http.test-port}/services/sts?wsdl quarkus.cxf.client.hello-ws-trust.security.sts.client.service-name = {http://docs.oasis-open.org/ws-sx/ws-trust/200512/}SecurityTokenService quarkus.cxf.client.hello-ws-trust.security.sts.client.endpoint-name = {http://docs.oasis-open.org/ws-sx/ws-trust/200512/}UT_Port quarkus.cxf.client.hello-ws-trust.security.sts.client.username = client quarkus.cxf.client.hello-ws-trust.security.sts.client.password = password quarkus.cxf.client.hello-ws-trust.security.sts.client.encryption.username = sts quarkus.cxf.client.hello-ws-trust.security.sts.client.encryption.properties."org.apache.ws.security.crypto.provider" = org.apache.ws.security.components.crypto.Merlin quarkus.cxf.client.hello-ws-trust.security.sts.client.encryption.properties."org.apache.ws.security.crypto.merlin.keystore.type" = pkcs12 quarkus.cxf.client.hello-ws-trust.security.sts.client.encryption.properties."org.apache.ws.security.crypto.merlin.keystore.password" = password quarkus.cxf.client.hello-ws-trust.security.sts.client.encryption.properties."org.apache.ws.security.crypto.merlin.keystore.alias" = client quarkus.cxf.client.hello-ws-trust.security.sts.client.encryption.properties."org.apache.ws.security.crypto.merlin.keystore.file" = client.pkcs12 quarkus.cxf.client.hello-ws-trust.security.sts.client.token.username = client quarkus.cxf.client.hello-ws-trust.security.sts.client.token.properties."org.apache.ws.security.crypto.provider" = org.apache.ws.security.components.crypto.Merlin quarkus.cxf.client.hello-ws-trust.security.sts.client.token.properties."org.apache.ws.security.crypto.merlin.keystore.type" = pkcs12 quarkus.cxf.client.hello-ws-trust.security.sts.client.token.properties."org.apache.ws.security.crypto.merlin.keystore.password" = password quarkus.cxf.client.hello-ws-trust.security.sts.client.token.properties."org.apache.ws.security.crypto.merlin.keystore.alias" = client quarkus.cxf.client.hello-ws-trust.security.sts.client.token.properties."org.apache.ws.security.crypto.merlin.keystore.file" = client.pkcs12 quarkus.cxf.client.hello-ws-trust.security.sts.client.token.usecert = true Tip The properties for configuring the STS client are provided by the io.quarkiverse.cxf:quarkus-cxf-rt-ws-security extension and documented on the quarkus-cxf-rt-ws-security reference page . Alternatively, the client can be set as a bean reference: application.properties quarkus.cxf.client.hello-ws-trust-bean.security.sts.client = #stsClientBean In that case, the @Named bean needs to be produced programmatically, e.g. using @jakarta.enterprise.inject.Produces : BeanProducers.java import jakarta.enterprise.context.ApplicationScoped; import jakarta.enterprise.inject.Produces; import jakarta.inject.Named; import org.apache.cxf.ws.security.SecurityConstants; import io.quarkiverse.cxf.ws.security.sts.client.STSClientBean; public class BeanProducers { /** * Create and configure an STSClient for use by the TrustHelloService client. */ @Produces @ApplicationScoped @Named("stsClientBean") STSClientBean createSTSClient() { /* * We cannot use org.apache.cxf.ws.security.trust.STSClient as a return type of this bean producer method * because it does not have a no-args constructor. STSClientBean is a subclass of STSClient having one. 
*/ STSClientBean stsClient = STSClientBean.create(); stsClient.setWsdlLocation("http://localhost:8081/services/sts?wsdl"); stsClient.setServiceQName(new QName("http://docs.oasis-open.org/ws-sx/ws-trust/200512/", "SecurityTokenService")); stsClient.setEndpointQName(new QName("http://docs.oasis-open.org/ws-sx/ws-trust/200512/", "UT_Port")); Map<String, Object> props = stsClient.getProperties(); props.put(SecurityConstants.USERNAME, "client"); props.put(SecurityConstants.PASSWORD, "password"); props.put(SecurityConstants.ENCRYPT_PROPERTIES, Thread.currentThread().getContextClassLoader().getResource("clientKeystore.properties")); props.put(SecurityConstants.ENCRYPT_USERNAME, "sts"); props.put(SecurityConstants.STS_TOKEN_USERNAME, "client"); props.put(SecurityConstants.STS_TOKEN_PROPERTIES, Thread.currentThread().getContextClassLoader().getResource("clientKeystore.properties")); props.put(SecurityConstants.STS_TOKEN_USE_CERT_FOR_KEYINFO, "true"); return stsClient; } } 5.7. HTTP Async Transport Implement async SOAP Clients using Apache HttpComponents HttpClient 5. 5.7.1. Maven coordinates Create a new project using quarkus-cxf-rt-transports-http-hc5 on code.quarkus.redhat.com or add these coordinates to your existing project: <dependency> <groupId>io.quarkiverse.cxf</groupId> <artifactId>quarkus-cxf-rt-transports-http-hc5</artifactId> </dependency> 5.7.2. Usage Once the quarkus-cxf-rt-transports-http-hc5 dependency is available in the classpath, CXF will use HttpAsyncClient for asynchronous calls and will continue using HttpURLConnection for synchronous calls. 5.7.2.1. Generate async methods Asynchronous client invocations require some additional methods in the service endpoint interface. That code is not generated by default. To enable it, you need to create a JAX-WS binding file with enableAsyncMapping set to true : Tip The sample code snippets used in this section come from the HC5 integration test in the source tree of Quarkus CXF src/main/resources/wsdl/async-binding.xml <?xml version="1.0"?> <bindings xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" xmlns="https://jakarta.ee/xml/ns/jaxws" wsdlLocation="CalculatorService.wsdl"> <bindings node="wsdl:definitions"> <enableAsyncMapping>true</enableAsyncMapping> </bindings> </bindings> This file should then be passed to wsdl2java through its additional-params property: application.properties quarkus.cxf.codegen.wsdl2java.includes = wsdl/*.wsdl quarkus.cxf.codegen.wsdl2java.additional-params = -b,src/main/resources/wsdl/async-binding.xml 5.7.2.2. 
Asynchronous Clients and Mutiny Once the asynchronous stubs are available, it is possible to wrap a client call in io.smallrye.mutiny.Uni as shown below: package io.quarkiverse.cxf.hc5.it; import java.util.concurrent.Future; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.QueryParam; import jakarta.ws.rs.core.MediaType; import org.jboss.eap.quickstarts.wscalculator.calculator.AddResponse; import org.jboss.eap.quickstarts.wscalculator.calculator.CalculatorService; import io.quarkiverse.cxf.annotation.CXFClient; import io.smallrye.mutiny.Uni; @Path("/hc5") public class Hc5Resource { @Inject @CXFClient("myCalculator") // name used in application.properties CalculatorService myCalculator; @SuppressWarnings("unchecked") @Path("/add-async") @GET @Produces(MediaType.TEXT_PLAIN) public Uni<Integer> addAsync(@QueryParam("a") int a, @QueryParam("b") int b) { return Uni.createFrom() .future( (Future<AddResponse>) myCalculator .addAsync(a, b, res -> { })) .map(addResponse -> addResponse.getReturn()); } } 5.7.2.3. Thread pool Asynchronous clients delivered by this extension leverage ManagedExecutor with a thread pool provided by Quarkus. The thread pool can be configured using the quarkus.thread-pool.* family of options . As a consequence of this, the executor and thread pool related attributes of org.apache.cxf.transports.http.configuration.HTTPClientPolicy are not honored for async clients on Quarkus. Tip You can see more details about the CXF asynchronous client and how to tune it further in CXF documentation . 5.8. XJC Plugins XJC plugins for wsdl2java code generation. You'll need to add this extension if you want to use any of the following in quarkus.cxf.codegen.wsdl2java.additional-params : -xjc-Xbg - generate getFoo() instead of isFoo() accessor methods for boolean fields. -xjc-Xdv - let the generated getter methods return the default value defined in the schema unless the field is set explicitly. -xjc-Xjavadoc - generate JavaDoc based on xs:documentation present in the schema. -xjc-Xproperty-listener - add PropertyChangeListener support to the generated beans. -xjc-Xts - generate toString() methods in model classes. -xjc-Xwsdlextension - generate beans that can be used directly with WSDL4J as extensors in the WSDL. Tip Check the wsdl2java section of User guide for more details about wsdl2java . 5.8.1. Maven coordinates Create a new project using quarkus-cxf-xjc-plugins on code.quarkus.redhat.com or add these coordinates to your existing project: <dependency> <groupId>io.quarkiverse.cxf</groupId> <artifactId>quarkus-cxf-xjc-plugins</artifactId> </dependency>
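As a minimal usage sketch for this extension (the file pattern and the chosen plugins are only examples, assuming your WSDL files live under src/main/resources/wsdl ), the XJC plugins are activated by passing the corresponding -xjc-X* switches to wsdl2java via additional-params :
# Example file pattern; adjust it to where your WSDL files live.
quarkus.cxf.codegen.wsdl2java.includes = wsdl/*.wsdl
# Activate the toString() and Javadoc XJC plugins provided by this extension.
quarkus.cxf.codegen.wsdl2java.additional-params = -xjc-Xts,-xjc-Xjavadoc
The plugin switches can be combined with other wsdl2java parameters in the same comma-separated list.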
"<dependency> <groupId>io.quarkiverse.cxf</groupId> <artifactId>quarkus-cxf</artifactId> </dependency>",
"Parameters for foo.wsdl quarkus.cxf.codegen.wsdl2java.foo-params.includes = wsdl/foo.wsdl quarkus.cxf.codegen.wsdl2java.foo-params.wsdl-location = wsdl/foo.wsdl Parameters for bar.wsdl quarkus.cxf.codegen.wsdl2java.bar-params.includes = wsdl/bar.wsdl quarkus.cxf.codegen.wsdl2java.bar-params.wsdl-location = wsdl/bar.wsdl quarkus.cxf.codegen.wsdl2java.bar-params.xjc = ts",
"Parameters for the foo package quarkus.cxf.java2ws.foo-params.includes = org.foo.* quarkus.cxf.java2ws.foo-params.additional-params = -servicename,FruitService Parameters for the bar package quarkus.cxf.java2ws.bar-params.includes = org.bar.* quarkus.cxf.java2ws.bar-params.additional-params = -servicename,HelloService",
"quarkus.cxf.decoupled-endpoint-base = https://api.example.com:USD{quarkus.http.ssl-port}USD{quarkus.cxf.path} or for plain HTTP quarkus.cxf.decoupled-endpoint-base = http://api.example.com:USD{quarkus.http.port}USD{quarkus.cxf.path}",
"import java.util.Map; import jakarta.inject.Inject; import jakarta.ws.rs.POST; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.Context; import jakarta.ws.rs.core.MediaType; import jakarta.ws.rs.core.UriInfo; import jakarta.xml.ws.BindingProvider; import io.quarkiverse.cxf.annotation.CXFClient; import org.eclipse.microprofile.config.inject.ConfigProperty; @Path(\"/my-rest\") public class MyRestEasyResource { @Inject @CXFClient(\"hello\") HelloService helloService; @ConfigProperty(name = \"quarkus.cxf.path\") String quarkusCxfPath; @POST @Path(\"/hello\") @Produces(MediaType.TEXT_PLAIN) public String hello(String body, @Context UriInfo uriInfo) throws IOException { // You may consider doing this only once if you are sure that your service is accessed // through a single hostname String decoupledEndpointBase = uriInfo.getBaseUriBuilder().path(quarkusCxfPath); Map>String, Object< requestContext = ((BindingProvider) helloService).getRequestContext(); requestContext.put(\"org.apache.cxf.ws.addressing.decoupled.endpoint.base\", decoupledEndpointBase); return wsrmHelloService.hello(body); } }",
"Parameters for foo.wsdl quarkus.cxf.codegen.wsdl2java.foo-params.includes = wsdl/foo.wsdl quarkus.cxf.codegen.wsdl2java.foo-params.wsdl-location = wsdl/foo.wsdl Parameters for bar.wsdl quarkus.cxf.codegen.wsdl2java.bar-params.includes = wsdl/bar.wsdl quarkus.cxf.codegen.wsdl2java.bar-params.wsdl-location = wsdl/bar.wsdl quarkus.cxf.codegen.wsdl2java.bar-params.xjc = ts",
"Parameters for the foo package quarkus.cxf.java2ws.foo-params.includes = org.foo.* quarkus.cxf.java2ws.foo-params.additional-params = -servicename,FruitService Parameters for the bar package quarkus.cxf.java2ws.bar-params.includes = org.bar.* quarkus.cxf.java2ws.bar-params.additional-params = -servicename,HelloService",
"quarkus.cxf.endpoint.\"/hello\".features = org.apache.cxf.ext.logging.LoggingFeature quarkus.cxf.endpoint.\"/fruit\".features = #myCustomLoggingFeature",
"import org.apache.cxf.ext.logging.LoggingFeature; import javax.enterprise.context.ApplicationScoped; import javax.enterprise.inject.Produces; class Producers { @Produces @ApplicationScoped LoggingFeature myCustomLoggingFeature() { LoggingFeature loggingFeature = new LoggingFeature(); loggingFeature.setPrettyLogging(true); return loggingFeature; } }",
"quarkus.cxf.endpoint.\"/my-endpoint\".features = org.apache.cxf.ext.logging.LoggingFeature",
"<dependency> <groupId>io.quarkiverse.cxf</groupId> <artifactId>quarkus-cxf-rt-features-metrics</artifactId> </dependency>",
"<dependency> <groupId>io.quarkiverse.cxf</groupId> <artifactId>quarkus-cxf-rt-features-metrics</artifactId> </dependency>",
"<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-micrometer-registry-prometheus</artifactId> </dependency>",
"quarkus.micrometer.export.json.enabled = true quarkus.micrometer.export.json.path = metrics/json quarkus.micrometer.export.prometheus.path = metrics/prometheus",
"mvn quarkus:dev",
"curl -d '<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\"><soap:Body><ns2:helloResponse xmlns:ns2=\"http://it.server.metrics.cxf.quarkiverse.io/\"><return>Hello Joe!</return></ns2:helloResponse></soap:Body></soap:Envelope>' -H 'Content-Type: text/xml' -X POST http://localhost:8080/metrics/client/hello",
"curl http://localhost:8080/q/metrics/json metrics: { \"cxf.server.requests\": { \"count;exception=None;faultCode=None;method=POST;operation=hello;outcome=SUCCESS;status=200;uri=/soap/hello\": 2, \"elapsedTime;exception=None;faultCode=None;method=POST;operation=hello;outcome=SUCCESS;status=200;uri=/soap/hello\": 64.0 }, }",
"<dependency> <groupId>io.quarkiverse.cxf</groupId> <artifactId>quarkus-cxf-integration-tracing-opentelemetry</artifactId> </dependency>",
"<dependency> <groupId>io.quarkiverse.cxf</groupId> <artifactId>quarkus-cxf-rt-ws-security</artifactId> </dependency>",
"<?xml version=\"1.0\" encoding=\"UTF-8\" ?> <wsp:Policy wsu:Id=\"SecurityServiceEncryptThenSignPolicy\" xmlns:wsp=\"http://www.w3.org/ns/ws-policy\" xmlns:wsu=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\" xmlns:sp=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702\"> <wsp:ExactlyOne> <wsp:All> 1 <sp:AsymmetricBinding xmlns:sp=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702\"> <wsp:Policy> 2 <sp:InitiatorToken> <wsp:Policy> <sp:X509Token sp:IncludeToken=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702/IncludeToken/AlwaysToRecipient\"> <wsp:Policy> <sp:WssX509V3Token11/> </wsp:Policy> </sp:X509Token> </wsp:Policy> </sp:InitiatorToken> <sp:RecipientToken> <wsp:Policy> <sp:X509Token sp:IncludeToken=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702/IncludeToken/Never\"> <wsp:Policy> <sp:WssX509V3Token11/> </wsp:Policy> </sp:X509Token> </wsp:Policy> </sp:RecipientToken> <sp:AlgorithmSuite> <wsp:Policy> <sp:Basic256/> </wsp:Policy> </sp:AlgorithmSuite> <sp:Layout> <wsp:Policy> <sp:Strict/> </wsp:Policy> </sp:Layout> <sp:IncludeTimestamp/> <sp:ProtectTokens/> <sp:OnlySignEntireHeadersAndBody/> <sp:EncryptBeforeSigning/> </wsp:Policy> </sp:AsymmetricBinding> 3 <sp:SignedParts xmlns:sp=\"http://schemas.xmlsoap.org/ws/2005/07/securitypolicy\"> <sp:Body/> </sp:SignedParts> 4 <sp:EncryptedParts xmlns:sp=\"http://schemas.xmlsoap.org/ws/2005/07/securitypolicy\"> <sp:Body/> </sp:EncryptedParts> <sp:Wss10 xmlns:sp=\"http://schemas.xmlsoap.org/ws/2005/07/securitypolicy\"> <wsp:Policy> <sp:MustSupportRefIssuerSerial/> </wsp:Policy> </sp:Wss10> </wsp:All> </wsp:ExactlyOne> </wsp:Policy>",
"@WebService(serviceName = \"EncryptSignPolicyHelloService\") @Policy(placement = Policy.Placement.BINDING, uri = \"encrypt-sign-policy.xml\") public interface EncryptSignPolicyHelloService extends AbstractHelloService { }",
"A service with encrypt-sign-policy.xml set quarkus.cxf.endpoint.\"/helloEncryptSign\".implementor = io.quarkiverse.cxf.it.security.policy.EncryptSignPolicyHelloServiceImpl can be jks or pkcs12 - set from Maven profiles in this test keystore.type = USD{keystore.type} Signature settings quarkus.cxf.endpoint.\"/helloEncryptSign\".security.signature.username = bob quarkus.cxf.endpoint.\"/helloEncryptSign\".security.signature.password = bob-keystore-password quarkus.cxf.endpoint.\"/helloEncryptSign\".security.signature.properties.\"org.apache.ws.security.crypto.provider\" = org.apache.ws.security.components.crypto.Merlin quarkus.cxf.endpoint.\"/helloEncryptSign\".security.signature.properties.\"org.apache.ws.security.crypto.merlin.keystore.type\" = USD{keystore.type} quarkus.cxf.endpoint.\"/helloEncryptSign\".security.signature.properties.\"org.apache.ws.security.crypto.merlin.keystore.password\" = bob-keystore-password quarkus.cxf.endpoint.\"/helloEncryptSign\".security.signature.properties.\"org.apache.ws.security.crypto.merlin.keystore.alias\" = bob quarkus.cxf.endpoint.\"/helloEncryptSign\".security.signature.properties.\"org.apache.ws.security.crypto.merlin.file\" = bob-keystore.USD{keystore.type} Encryption settings quarkus.cxf.endpoint.\"/helloEncryptSign\".security.encryption.username = alice quarkus.cxf.endpoint.\"/helloEncryptSign\".security.encryption.properties.\"org.apache.ws.security.crypto.provider\" = org.apache.ws.security.components.crypto.Merlin quarkus.cxf.endpoint.\"/helloEncryptSign\".security.encryption.properties.\"org.apache.ws.security.crypto.merlin.keystore.type\" = USD{keystore.type} quarkus.cxf.endpoint.\"/helloEncryptSign\".security.encryption.properties.\"org.apache.ws.security.crypto.merlin.keystore.password\" = bob-keystore-password quarkus.cxf.endpoint.\"/helloEncryptSign\".security.encryption.properties.\"org.apache.ws.security.crypto.merlin.keystore.alias\" = bob quarkus.cxf.endpoint.\"/helloEncryptSign\".security.encryption.properties.\"org.apache.ws.security.crypto.merlin.file\" = bob-keystore.USD{keystore.type}",
"A client with encrypt-sign-policy.xml set quarkus.cxf.client.helloEncryptSign.client-endpoint-url = https://localhost:USD{quarkus.http.test-ssl-port}/services/helloEncryptSign quarkus.cxf.client.helloEncryptSign.service-interface = io.quarkiverse.cxf.it.security.policy.EncryptSignPolicyHelloService quarkus.cxf.client.helloEncryptSign.features = #messageCollector The client-endpoint-url above is HTTPS, so we have to setup the server's SSL certificates quarkus.cxf.client.helloEncryptSign.trust-store = client-truststore.USD{keystore.type} quarkus.cxf.client.helloEncryptSign.trust-store-password = client-truststore-password Signature settings quarkus.cxf.client.helloEncryptSign.security.signature.username = alice quarkus.cxf.client.helloEncryptSign.security.signature.password = alice-keystore-password quarkus.cxf.client.helloEncryptSign.security.signature.properties.\"org.apache.ws.security.crypto.provider\" = org.apache.ws.security.components.crypto.Merlin quarkus.cxf.client.helloEncryptSign.security.signature.properties.\"org.apache.ws.security.crypto.merlin.keystore.type\" = pkcs12 quarkus.cxf.client.helloEncryptSign.security.signature.properties.\"org.apache.ws.security.crypto.merlin.keystore.password\" = alice-keystore-password quarkus.cxf.client.helloEncryptSign.security.signature.properties.\"org.apache.ws.security.crypto.merlin.keystore.alias\" = alice quarkus.cxf.client.helloEncryptSign.security.signature.properties.\"org.apache.ws.security.crypto.merlin.file\" = alice-keystore.USD{keystore.type} Encryption settings quarkus.cxf.client.helloEncryptSign.security.encryption.username = bob quarkus.cxf.client.helloEncryptSign.security.encryption.properties.\"org.apache.ws.security.crypto.provider\" = org.apache.ws.security.components.crypto.Merlin quarkus.cxf.client.helloEncryptSign.security.encryption.properties.\"org.apache.ws.security.crypto.merlin.keystore.type\" = pkcs12 quarkus.cxf.client.helloEncryptSign.security.encryption.properties.\"org.apache.ws.security.crypto.merlin.keystore.password\" = alice-keystore-password quarkus.cxf.client.helloEncryptSign.security.encryption.properties.\"org.apache.ws.security.crypto.merlin.keystore.alias\" = alice quarkus.cxf.client.helloEncryptSign.security.encryption.properties.\"org.apache.ws.security.crypto.merlin.file\" = alice-keystore.USD{keystore.type}",
"Clone the repository git clone https://github.com/quarkiverse/quarkus-cxf.git -o upstream cd quarkus-cxf Build the whole source tree mvn clean install -DskipTests -Dquarkus.build.skip Run the test cd integration-tests/ws-security-policy mvn clean test -Dtest=EncryptSignPolicyTest",
"[prefix].signature.properties.\"org.apache.ws.security.crypto.provider\" = org.apache.ws.security.components.crypto.Merlin [prefix].signature.properties.\"org.apache.ws.security.crypto.merlin.keystore.password\" = password [prefix].signature.properties.\"org.apache.ws.security.crypto.merlin.file\" = certs/alice.jks",
"[prefix].encryption.properties.\"org.apache.ws.security.crypto.provider\" = org.apache.ws.security.components.crypto.Merlin [prefix].encryption.properties.\"org.apache.ws.security.crypto.merlin.keystore.password\" = password [prefix].encryption.properties.\"org.apache.ws.security.crypto.merlin.file\" = certs/alice.jks",
"<wsp:Policy wsu:Id=\"SecurityServiceEncryptThenSignPolicy\" xmlns:wsp=\"http://www.w3.org/ns/ws-policy\" xmlns:wsu=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\" xmlns:sp=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702\"> <wsp:ExactlyOne> <wsp:All> <sp:AsymmetricBinding xmlns:sp=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702\"> <wsp:Policy> <sp:AlgorithmSuite> <wsp:Policy> <sp:CustomAlgorithmSuite/> </wsp:Policy> </sp:AlgorithmSuite> </wsp:Policy> </sp:AsymmetricBinding> </wsp:All> </wsp:ExactlyOne> </wsp:Policy>",
"[prefix].encryption.properties.\"org.apache.ws.security.crypto.provider\" = org.apache.ws.security.components.crypto.Merlin [prefix].encryption.properties.\"org.apache.ws.security.crypto.merlin.keystore.password\" = password [prefix].encryption.properties.\"org.apache.ws.security.crypto.merlin.file\" = certs/alice.jks",
"[prefix].token.properties.\"org.apache.ws.security.crypto.provider\" = org.apache.ws.security.components.crypto.Merlin [prefix].token.properties.\"org.apache.ws.security.crypto.merlin.keystore.password\" = password [prefix].token.properties.\"org.apache.ws.security.crypto.merlin.file\" = certs/alice.jks",
"[prefix].signature.properties.\"org.apache.ws.security.crypto.provider\" = org.apache.ws.security.components.crypto.Merlin [prefix].signature.properties.\"org.apache.ws.security.crypto.merlin.keystore.password\" = password [prefix].signature.properties.\"org.apache.ws.security.crypto.merlin.file\" = certs/alice.jks",
"[prefix].encryption.properties.\"org.apache.ws.security.crypto.provider\" = org.apache.ws.security.components.crypto.Merlin [prefix].encryption.properties.\"org.apache.ws.security.crypto.merlin.keystore.password\" = password [prefix].encryption.properties.\"org.apache.ws.security.crypto.merlin.file\" = certs/alice.jks",
"<wsp:Policy wsu:Id=\"SecurityServiceEncryptThenSignPolicy\" xmlns:wsp=\"http://www.w3.org/ns/ws-policy\" xmlns:wsu=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\" xmlns:sp=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702\"> <wsp:ExactlyOne> <wsp:All> <sp:AsymmetricBinding xmlns:sp=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702\"> <wsp:Policy> <sp:AlgorithmSuite> <wsp:Policy> <sp:CustomAlgorithmSuite/> </wsp:Policy> </sp:AlgorithmSuite> </wsp:Policy> </sp:AsymmetricBinding> </wsp:All> </wsp:ExactlyOne> </wsp:Policy>",
"<dependency> <groupId>io.quarkiverse.cxf</groupId> <artifactId>quarkus-cxf-rt-ws-rm</artifactId> </dependency>",
"cd test-util-parent/test-ws-rm-server-jvm mvn clean install",
"cd ../../integration-tests/ws-rm-client mvn clean test",
"<dependency> <groupId>io.quarkiverse.cxf</groupId> <artifactId>quarkus-cxf-services-sts</artifactId> </dependency>",
"<sp:IssuedToken sp:IncludeToken=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702/IncludeToken/AlwaysToRecipient\"> <sp:RequestSecurityTokenTemplate> <t:TokenType>http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</t:TokenType> <t:KeyType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/PublicKey</t:KeyType> </sp:RequestSecurityTokenTemplate> <wsp:Policy> <sp:RequireInternalReference /> </wsp:Policy> <sp:Issuer> <wsaws:Address>http://localhost:8081/services/sts</wsaws:Address> <wsaws:Metadata xmlns:wsdli=\"http://www.w3.org/2006/01/wsdl-instance\" wsdli:wsdlLocation=\"http://localhost:8081/services/sts?wsdl\"> <wsaw:ServiceName xmlns:wsaw=\"http://www.w3.org/2006/05/addressing/wsdl\" xmlns:stsns=\"http://docs.oasis-open.org/ws-sx/ws-trust/200512/\" EndpointName=\"UT_Port\">stsns:SecurityTokenService</wsaw:ServiceName> </wsaws:Metadata> </sp:Issuer> </sp:IssuedToken>",
"@WebServiceProvider(serviceName = \"SecurityTokenService\", portName = \"UT_Port\", targetNamespace = \"http://docs.oasis-open.org/ws-sx/ws-trust/200512/\", wsdlLocation = \"ws-trust-1.4-service.wsdl\") public class Sts extends SecurityTokenServiceProvider { public Sts() throws Exception { super(); StaticSTSProperties props = new StaticSTSProperties(); props.setSignatureCryptoProperties(\"stsKeystore.properties\"); props.setSignatureUsername(\"sts\"); props.setCallbackHandlerClass(StsCallbackHandler.class.getName()); props.setIssuer(\"SampleSTSIssuer\"); List<ServiceMBean> services = new LinkedList<ServiceMBean>(); StaticService service = new StaticService(); final Config config = ConfigProvider.getConfig(); final int port = LaunchMode.current().equals(LaunchMode.TEST) ? config.getValue(\"quarkus.http.test-port\", Integer.class) : config.getValue(\"quarkus.http.port\", Integer.class); service.setEndpoints(Arrays.asList( \"http://localhost:\" + port + \"/services/hello-ws-trust\", \"http://localhost:\" + port + \"/services/hello-ws-trust-actas\", \"http://localhost:\" + port + \"/services/hello-ws-trust-onbehalfof\")); services.add(service); TokenIssueOperation issueOperation = new TokenIssueOperation(); issueOperation.setServices(services); issueOperation.getTokenProviders().add(new SAMLTokenProvider()); // required for OnBehalfOf issueOperation.getTokenValidators().add(new UsernameTokenValidator()); // added for OnBehalfOf and ActAs issueOperation.getDelegationHandlers().add(new UsernameTokenDelegationHandler()); issueOperation.setStsProperties(props); TokenValidateOperation validateOperation = new TokenValidateOperation(); validateOperation.getTokenValidators().add(new SAMLTokenValidator()); validateOperation.setStsProperties(props); this.setIssueOperation(issueOperation); this.setValidateOperation(validateOperation); } }",
"quarkus.cxf.endpoint.\"/sts\".implementor = io.quarkiverse.cxf.it.ws.trust.sts.Sts quarkus.cxf.endpoint.\"/sts\".logging.enabled = pretty quarkus.cxf.endpoint.\"/sts\".security.signature.username = sts quarkus.cxf.endpoint.\"/sts\".security.signature.password = password quarkus.cxf.endpoint.\"/sts\".security.validate.token = false quarkus.cxf.endpoint.\"/sts\".security.signature.properties.\"org.apache.ws.security.crypto.provider\" = org.apache.ws.security.components.crypto.Merlin quarkus.cxf.endpoint.\"/sts\".security.signature.properties.\"org.apache.ws.security.crypto.merlin.keystore.type\" = pkcs12 quarkus.cxf.endpoint.\"/sts\".security.signature.properties.\"org.apache.ws.security.crypto.merlin.keystore.password\" = password quarkus.cxf.endpoint.\"/sts\".security.signature.properties.\"org.apache.ws.security.crypto.merlin.keystore.file\" = sts.pkcs12",
"@WebService(portName = \"TrustHelloServicePort\", serviceName = \"TrustHelloService\", targetNamespace = \"https://quarkiverse.github.io/quarkiverse-docs/quarkus-cxf/test/ws-trust\", endpointInterface = \"io.quarkiverse.cxf.it.ws.trust.server.TrustHelloService\") public class TrustHelloServiceImpl implements TrustHelloService { @WebMethod @Override public String hello(String person) { return \"Hello \" + person + \"!\"; } }",
"@WebService(targetNamespace = \"https://quarkiverse.github.io/quarkiverse-docs/quarkus-cxf/test/ws-trust\") @Policy(placement = Policy.Placement.BINDING, uri = \"classpath:/asymmetric-saml2-policy.xml\") public interface TrustHelloService { @WebMethod @Policies({ @Policy(placement = Policy.Placement.BINDING_OPERATION_INPUT, uri = \"classpath:/io-policy.xml\"), @Policy(placement = Policy.Placement.BINDING_OPERATION_OUTPUT, uri = \"classpath:/io-policy.xml\") }) String hello(String person); }",
"quarkus.cxf.endpoint.\"/hello-ws-trust\".implementor = io.quarkiverse.cxf.it.ws.trust.server.TrustHelloServiceImpl quarkus.cxf.endpoint.\"/hello-ws-trust\".logging.enabled = pretty quarkus.cxf.endpoint.\"/hello-ws-trust\".security.signature.username = service quarkus.cxf.endpoint.\"/hello-ws-trust\".security.signature.password = password quarkus.cxf.endpoint.\"/hello-ws-trust\".security.signature.properties.\"org.apache.ws.security.crypto.provider\" = org.apache.ws.security.components.crypto.Merlin quarkus.cxf.endpoint.\"/hello-ws-trust\".security.signature.properties.\"org.apache.ws.security.crypto.merlin.keystore.type\" = pkcs12 quarkus.cxf.endpoint.\"/hello-ws-trust\".security.signature.properties.\"org.apache.ws.security.crypto.merlin.keystore.password\" = password quarkus.cxf.endpoint.\"/hello-ws-trust\".security.signature.properties.\"org.apache.ws.security.crypto.merlin.keystore.alias\" = service quarkus.cxf.endpoint.\"/hello-ws-trust\".security.signature.properties.\"org.apache.ws.security.crypto.merlin.file\" = service.pkcs12 quarkus.cxf.endpoint.\"/hello-ws-trust\".security.encryption.properties.\"org.apache.ws.security.crypto.provider\" = org.apache.ws.security.components.crypto.Merlin quarkus.cxf.endpoint.\"/hello-ws-trust\".security.encryption.properties.\"org.apache.ws.security.crypto.merlin.keystore.type\" = pkcs12 quarkus.cxf.endpoint.\"/hello-ws-trust\".security.encryption.properties.\"org.apache.ws.security.crypto.merlin.keystore.password\" = password quarkus.cxf.endpoint.\"/hello-ws-trust\".security.encryption.properties.\"org.apache.ws.security.crypto.merlin.keystore.alias\" = service quarkus.cxf.endpoint.\"/hello-ws-trust\".security.encryption.properties.\"org.apache.ws.security.crypto.merlin.file\" = service.pkcs12",
"quarkus.cxf.client.hello-ws-trust.security.sts.client.wsdl = http://localhost:USD{quarkus.http.test-port}/services/sts?wsdl quarkus.cxf.client.hello-ws-trust.security.sts.client.service-name = {http://docs.oasis-open.org/ws-sx/ws-trust/200512/}SecurityTokenService quarkus.cxf.client.hello-ws-trust.security.sts.client.endpoint-name = {http://docs.oasis-open.org/ws-sx/ws-trust/200512/}UT_Port quarkus.cxf.client.hello-ws-trust.security.sts.client.username = client quarkus.cxf.client.hello-ws-trust.security.sts.client.password = password quarkus.cxf.client.hello-ws-trust.security.sts.client.encryption.username = sts quarkus.cxf.client.hello-ws-trust.security.sts.client.encryption.properties.\"org.apache.ws.security.crypto.provider\" = org.apache.ws.security.components.crypto.Merlin quarkus.cxf.client.hello-ws-trust.security.sts.client.encryption.properties.\"org.apache.ws.security.crypto.merlin.keystore.type\" = pkcs12 quarkus.cxf.client.hello-ws-trust.security.sts.client.encryption.properties.\"org.apache.ws.security.crypto.merlin.keystore.password\" = password quarkus.cxf.client.hello-ws-trust.security.sts.client.encryption.properties.\"org.apache.ws.security.crypto.merlin.keystore.alias\" = client quarkus.cxf.client.hello-ws-trust.security.sts.client.encryption.properties.\"org.apache.ws.security.crypto.merlin.keystore.file\" = client.pkcs12 quarkus.cxf.client.hello-ws-trust.security.sts.client.token.username = client quarkus.cxf.client.hello-ws-trust.security.sts.client.token.properties.\"org.apache.ws.security.crypto.provider\" = org.apache.ws.security.components.crypto.Merlin quarkus.cxf.client.hello-ws-trust.security.sts.client.token.properties.\"org.apache.ws.security.crypto.merlin.keystore.type\" = pkcs12 quarkus.cxf.client.hello-ws-trust.security.sts.client.token.properties.\"org.apache.ws.security.crypto.merlin.keystore.password\" = password quarkus.cxf.client.hello-ws-trust.security.sts.client.token.properties.\"org.apache.ws.security.crypto.merlin.keystore.alias\" = client quarkus.cxf.client.hello-ws-trust.security.sts.client.token.properties.\"org.apache.ws.security.crypto.merlin.keystore.file\" = client.pkcs12 quarkus.cxf.client.hello-ws-trust.security.sts.client.token.usecert = true",
"quarkus.cxf.client.hello-ws-trust-bean.security.sts.client = #stsClientBean",
"import jakarta.enterprise.context.ApplicationScoped; import jakarta.enterprise.inject.Produces; import jakarta.inject.Named; import org.apache.cxf.ws.security.SecurityConstants; import io.quarkiverse.cxf.ws.security.sts.client.STSClientBean; public class BeanProducers { /** * Create and configure an STSClient for use by the TrustHelloService client. */ @Produces @ApplicationScoped @Named(\"stsClientBean\") STSClientBean createSTSClient() { /* * We cannot use org.apache.cxf.ws.security.trust.STSClient as a return type of this bean producer method * because it does not have a no-args constructor. STSClientBean is a subclass of STSClient having one. */ STSClientBean stsClient = STSClientBean.create(); stsClient.setWsdlLocation(\"http://localhost:8081/services/sts?wsdl\"); stsClient.setServiceQName(new QName(\"http://docs.oasis-open.org/ws-sx/ws-trust/200512/\", \"SecurityTokenService\")); stsClient.setEndpointQName(new QName(\"http://docs.oasis-open.org/ws-sx/ws-trust/200512/\", \"UT_Port\")); Map<String, Object> props = stsClient.getProperties(); props.put(SecurityConstants.USERNAME, \"client\"); props.put(SecurityConstants.PASSWORD, \"password\"); props.put(SecurityConstants.ENCRYPT_PROPERTIES, Thread.currentThread().getContextClassLoader().getResource(\"clientKeystore.properties\")); props.put(SecurityConstants.ENCRYPT_USERNAME, \"sts\"); props.put(SecurityConstants.STS_TOKEN_USERNAME, \"client\"); props.put(SecurityConstants.STS_TOKEN_PROPERTIES, Thread.currentThread().getContextClassLoader().getResource(\"clientKeystore.properties\")); props.put(SecurityConstants.STS_TOKEN_USE_CERT_FOR_KEYINFO, \"true\"); return stsClient; } }",
"<dependency> <groupId>io.quarkiverse.cxf</groupId> <artifactId>quarkus-cxf-rt-transports-http-hc5</artifactId> </dependency>",
"<?xml version=\"1.0\"?> <bindings xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" xmlns:wsdl=\"http://schemas.xmlsoap.org/wsdl/\" xmlns=\"https://jakarta.ee/xml/ns/jaxws\" wsdlLocation=\"CalculatorService.wsdl\"> <bindings node=\"wsdl:definitions\"> <enableAsyncMapping>true</enableAsyncMapping> </bindings> </bindings>",
"quarkus.cxf.codegen.wsdl2java.includes = wsdl/*.wsdl quarkus.cxf.codegen.wsdl2java.additional-params = -b,src/main/resources/wsdl/async-binding.xml",
"package io.quarkiverse.cxf.hc5.it; import java.util.concurrent.Future; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.QueryParam; import jakarta.ws.rs.core.MediaType; import org.jboss.eap.quickstarts.wscalculator.calculator.AddResponse; import org.jboss.eap.quickstarts.wscalculator.calculator.CalculatorService; import io.quarkiverse.cxf.annotation.CXFClient; import io.smallrye.mutiny.Uni; @Path(\"/hc5\") public class Hc5Resource { @Inject @CXFClient(\"myCalculator\") // name used in application.properties CalculatorService myCalculator; @SuppressWarnings(\"unchecked\") @Path(\"/add-async\") @GET @Produces(MediaType.TEXT_PLAIN) public Uni<Integer> addAsync(@QueryParam(\"a\") int a, @QueryParam(\"b\") int b) { return Uni.createFrom() .future( (Future<AddResponse>) myCalculator .addAsync(a, b, res -> { })) .map(addResponse -> addResponse.getReturn()); } }",
"<dependency> <groupId>io.quarkiverse.cxf</groupId> <artifactId>quarkus-cxf-xjc-plugins</artifactId> </dependency>"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_quarkus_reference/ass-camel-quarkus-cxf-reference |
4.335. util-linux-ng | 4.335. util-linux-ng 4.335.1. RHSA-2011:1691 - Low: util-linux-ng security, bug fix, and enhancement update Updated util-linux-ng packages that fix multiple security issues, several bugs, and add various enhancements are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having low security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. The util-linux-ng packages contain a large variety of low-level system utilities that are necessary for a Linux operating system to function. Security Fixes CVE-2011-1675 , CVE-2011-1677 Multiple flaws were found in the way the mount and umount commands performed mtab (mounted file systems table) file updates. A local, unprivileged user allowed to mount or unmount file systems could use these flaws to corrupt the mtab file and create a stale lock file, preventing other users from mounting and unmounting file systems. Bug Fixes BZ# 675999 Due to a hard coded limit of 128 devices, an attempt to run the "blkid -c" command on more than 128 devices caused blkid to terminate unexpectedly. This update increases the maximum number of devices to 8192 so that blkid no longer crashes in this scenario. BZ# 679741 Previously, the "swapon -a" command did not detect device-mapper devices that were already in use. This update corrects the swapon utility to detect such devices as expected. BZ# 684203 Prior to this update, the presence of an invalid line in the /etc/fstab file could cause the umount utility to terminate unexpectedly with a segmentation fault. This update applies a patch that corrects this error so that umount now correctly reports invalid lines and no longer crashes. BZ# 696959 Previously, an attempt to use the wipefs utility on a partitioned device caused the utility to terminate unexpectedly with an error. This update adapts wipefs to only display a warning message in this situation. BZ# 712158 When providing information on interprocess communication (IPC) facilities, the ipcs utility could previously display a process owner as a negative number if the user's UID was too large. This update adapts the underlying source code to make sure the UID values are now displayed correctly. BZ# 712808 In the installation scriptlets, the uuidd package uses the chkconfig utility to enable and disable the uuidd service. Previously, this package did not depend on the chkconfig package, which could lead to errors during installation if chkconfig was not installed. This update adds chkconfig to the list of dependencies so that such errors no longer occur. BZ# 716995 The version of the /etc/udev/rules.d/60-raw.rules file contained a statement that both this file and raw devices are deprecated. This is no longer true and the Red Hat Enterprise Linux kernel supports this functionality. With this update, the aforementioned file no longer contains this incorrect statement. BZ# 723352 Previously, an attempt to use the cfdisk utility to read the default Red Hat Enterprise Linux 6 partition layout failed with an error. This update corrects this error and the cfdisk utility can now read the default partition layout as expected. BZ# 679831 The version of the tailf(1) manual page incorrectly stated that users can use the "--lines=NUMBER" command line option to limit the number of displayed lines. 
However, the tailf utility does not allow the use of the equals sign (=) between the option and its argument. This update corrects this error. BZ# 694648 The fstab(5) manual page has been updated to clarify that empty lines in the /etc/fstab configuration file are ignored. Enhancements BZ# 692119 A new fstrim utility has been added to the package. This utility allows the root user to discard unused blocks on a mounted file system. BZ# 696731 The login utility has been updated to provide support for failed login attempts that are reported by PAM. BZ# 723638 The lsblk utility has been updated to provide additional information about the topology and status of block devices. BZ# 726092 The agetty utility has been updated to pass the hostname to the login utility. All users of util-linux-ng are advised to upgrade to these updated packages, which contain backported patches to correct these issues and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/util-linux-ng |
Chapter 2. Support policy for Red Hat build of OpenJDK | Chapter 2. Support policy for Red Hat build of OpenJDK Red Hat will support select major versions of Red Hat build of OpenJDK in its products. For consistency, these are the same versions that Oracle designates as long-term support (LTS) for the Oracle JDK. A major version of Red Hat build of OpenJDK will be supported for a minimum of six years from the time that version is first introduced. For more information, see the OpenJDK Life Cycle and Support Policy . Note RHEL 6 reached the end of life in November 2020. Because of this, Red Hat build of OpenJDK is not supporting RHEL 6 as a supported configuration. Important Full support for Red Hat build of OpenJDK 11 ends on 31 October 2024. For more information, see Red Hat build of OpenJDK 11 - End of full support . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.25/rn-openjdk-support-policy |
Chapter 7. Configuring a basic overcloud | Chapter 7. Configuring a basic overcloud An overcloud with a basic configuration contains no custom features. To configure a basic Red Hat OpenStack Platform (RHOSP) environment, you must perform the following tasks: Register the bare-metal nodes for your overcloud. Provide director with an inventory of the hardware of the bare-metal nodes. Tag each bare metal node with a resource class that matches the node to its designated role. Tip You can add advanced configuration options to this basic overcloud and customize it to your specifications. For more information, see Advanced Overcloud Customization . 7.1. Registering nodes for the overcloud Director requires a node definition template that specifies the hardware and power management details of your nodes. You can create this template in JSON format, nodes.json , or YAML format, nodes.yaml . Procedure Create a template named nodes.json or nodes.yaml that lists your nodes. Use the following JSON and YAML template examples to understand how to structure your node definition template: Example JSON template Example YAML template This template contains the following attributes: name The logical name for the node. ports The port to access the specific IPMI device. You can define the following optional port attributes: address : The MAC address for the network interface on the node. Use only the MAC address for the Provisioning NIC of each system. physical_network : The physical network that is connected to the Provisioning NIC. local_link_connection : If you use IPv6 provisioning and LLDP does not correctly populate the local link connection during introspection, you must include fake data with the switch_id and port_id fields in the local_link_connection parameter. For more information on how to include fake data, see Using director introspection to collect bare metal node hardware information . cpu (Optional) The number of CPUs on the node. memory (Optional) The amount of memory in MB. disk (Optional) The size of the hard disk in GB. arch (Optional) The system architecture. Important When building a multi-architecture cloud, the arch key is mandatory to distinguish nodes using x86_64 and ppc64le architectures. pm_type The power management driver that you want to use. This example uses the IPMI driver ( ipmi ). Note IPMI is the preferred supported power management driver. For more information about supported power management types and their options, see Power management drivers . If these power management drivers do not work as expected, use IPMI for your power management. pm_user; pm_password The IPMI username and password. pm_addr The IP address of the IPMI device. After you create the template, run the following commands to verify the formatting and syntax: Important You must also include the --http-boot /var/lib/ironic/tftpboot/ option for multi-architecture nodes. Save the file to the home directory of the stack user ( /home/stack/nodes.json ). Import the template to director to register each node from the template into director: Note If you use UEFI boot mode, you must also set the boot mode on each node. If you introspect your nodes without setting UEFI boot mode, the nodes boot in legacy mode. For more information, see Setting the boot mode to UEFI boot mode . Wait for the node registration and configuration to complete. When complete, confirm that director has successfully registered the nodes: 7.2. 
Creating an inventory of the bare-metal node hardware Director needs the hardware inventory of the nodes in your Red Hat OpenStack Platform (RHOSP) deployment for profile tagging, benchmarking, and manual root disk assignment. You can provide the hardware inventory to director by using one of the following methods: Automatic: You can use director's introspection process, which collects the hardware information from each node. This process boots an introspection agent on each node. The introspection agent collects hardware data from the node and sends the data back to director. Director stores the hardware data in the Object Storage service (swift) running on the undercloud node. Manual: You can manually configure a basic hardware inventory for each bare metal machine. This inventory is stored in the Bare Metal Provisioning service (ironic) and is used to manage and deploy the bare-metal machines. Note You must use director's automatic introspection process if you use derive_params.yaml for your overcloud, which requires introspection data to be present. For more information on derive_params.yaml , see Workflows and derived parameters . The director automatic introspection process provides the following advantages over the manual method for setting the Bare Metal Provisioning service ports: Introspection records all of the connected ports in the hardware information, including the port to use for PXE boot if it is not already configured in nodes.yaml . Introspection sets the local_link_connection attribute for each port if the attribute is discoverable using LLDP. When you use the manual method, you must configure local_link_connection for each port when you register the nodes. Introspection sets the physical_network attribute for the Bare Metal Provisioning service ports when deploying a spine-and-leaf or DCN architecture. 7.2.1. Using director introspection to collect bare metal node hardware information After you register a physical machine as a bare metal node, you can automatically add its hardware details and create ports for each of its Ethernet MAC addresses by using director introspection. Tip As an alternative to automatic introspection, you can manually provide director with the hardware information for your bare metal nodes. For more information, see Manually configuring bare metal node hardware information . Prerequisites You have registered the bare-metal nodes for your overcloud. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Run the pre-introspection validation group to check the introspection requirements: Review the results of the validation report. Optional: Review detailed output from a specific validation: Replace <validation> with the UUID of the specific validation from the report that you want to review. Important A FAILED validation does not prevent you from deploying or running Red Hat OpenStack Platform. However, a FAILED validation can indicate a potential issue with a production environment. Inspect the hardware attributes of each node. You can inspect the hardware attributes of all nodes, or specific nodes: Inspect the hardware attributes of all nodes: Use the --all-manageable option to introspect only the nodes that are in a managed state. In this example, all nodes are in a managed state. Use the --provide option to reset all nodes to an available state after introspection. 
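For illustration only, the validation check and the all-nodes introspection described above might look like the following sketch when run as the stack user on the undercloud (output and timing vary by environment): source ~/stackrc
openstack tripleo validator run --group pre-introspection
# introspect every node in the manageable state, then return the nodes to the available state
openstack overcloud node introspect --all-manageable --provide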
Inspect the hardware attributes of specific nodes: Use the --provide option to reset all the specified nodes to an available state after introspection. Replace <node1> , [node2] , and all nodes up to [noden] with the UUID of each node that you want to introspect. Monitor the introspection progress logs in a separate terminal window: Important Ensure that the introspection process runs to completion. Introspection usually takes 15 minutes for bare metal nodes. However, incorrectly sized introspection networks can cause it to take much longer, which can result in the introspection failing. Optional: If you have configured your undercloud for bare metal provisioning over IPv6, then you need to also check that LLDP has set the local_link_connection for Bare Metal Provisioning service (ironic) ports: If the Local Link Connection field is empty for the port on your bare metal node, you must populate the local_link_connection value manually with fake data. The following example sets the fake switch ID to 52:54:00:00:00:00 , and the fake port ID to p0 : Verify that the Local Link Connection field contains the fake data: After the introspection completes, all nodes change to an available state. 7.2.2. Manually configuring bare-metal node hardware information After you register a physical machine as a bare metal node, you can manually add its hardware details and create bare-metal ports for each of its Ethernet MAC addresses. You must create at least one bare-metal port before deploying the overcloud. Tip As an alternative to manual introspection, you can use the automatic director introspection process to collect the hardware information for your bare metal nodes. For more information, see Using director introspection to collect bare metal node hardware information . Prerequisites You have registered the bare-metal nodes for your overcloud. You have configured local_link_connection for each port on the registered nodes in nodes.json . For more information, see Registering nodes for the overcloud . Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Set the boot option to local for each registered node by adding boot_option':'local to the capabilities of the node: Replace <node> with the ID of the bare metal node. Specify the deploy kernel and deploy ramdisk for the node driver: Replace <node> with the ID of the bare metal node. Replace <kernel_file> with the path to the .kernel image, for example, file:///var/lib/ironic/httpboot/agent.kernel . Replace <initramfs_file> with the path to the .initramfs image, for example, file:///var/lib/ironic/httpboot/agent.ramdisk . Update the node properties to match the hardware specifications on the node: Replace <node> with the ID of the bare metal node. Replace <cpu> with the number of CPUs. Replace <ram> with the RAM in MB. Replace <disk> with the disk size in GB. Replace <arch> with the architecture type. Optional: Specify the IPMI cipher suite for each node: Replace <node> with the ID of the bare metal node. Replace <version> with the cipher suite version to use on the node. Set to one of the following valid values: 3 - The node uses the AES-128 with SHA1 cipher suite. 17 - The node uses the AES-128 with SHA256 cipher suite. Optional: If you have multiple disks, set the root device hints to inform the deploy ramdisk which disk to use for deployment: Replace <node> with the ID of the bare metal node. 
Replace <property> and <value> with details about the disk that you want to use for deployment, for example root_device='{"size": "128"}' RHOSP supports the following properties: model (String): Device identifier. vendor (String): Device vendor. serial (String): Disk serial number. hctl (String): Host:Channel:Target:Lun for SCSI. size (Integer): Size of the device in GB. wwn (String): Unique storage identifier. wwn_with_extension (String): Unique storage identifier with the vendor extension appended. wwn_vendor_extension (String): Unique vendor storage identifier. rotational (Boolean): True for a rotational device (HDD), otherwise false (SSD). name (String): The name of the device, for example: /dev/sdb1 . Use this property only for devices with persistent names. Note If you specify more than one property, the device must match all of those properties. Inform the Bare Metal Provisioning service of the node network card by creating a port with the MAC address of the NIC on the provisioning network: Replace <node_uuid> with the unique ID of the bare metal node. Replace <mac_address> with the MAC address of the NIC used to PXE boot. Validate the configuration of the node: The validation output Result indicates the following: False : The interface has failed validation. If the reason provided includes missing the instance_info parameters ['ramdisk', 'kernel', and 'image_source'] , this might be because the Compute service populates those missing parameters at the beginning of the deployment process, therefore they have not been set at this point. If you are using a whole disk image, then you might need to only set image_source to pass the validation. True : The interface has passed validation. None : The interface is not supported for your driver. 7.3. Tagging nodes into profiles After you register and inspect the hardware of each node, tag the nodes into specific profiles. These profile tags match your nodes to flavors, which assigns the flavors to deployment roles. The following example shows the relationships across roles, flavors, profiles, and nodes for Controller nodes: Type Description Role The Controller role defines how director configures Controller nodes. Flavor The control flavor defines the hardware profile for nodes to use as controllers. You assign this flavor to the Controller role so that director can decide which nodes to use. Profile The control profile is a tag you apply to the control flavor. This defines the nodes that belong to the flavor. Node You also apply the control profile tag to individual nodes, which groups them to the control flavor and, as a result, director configures them using the Controller role. Default profile flavors compute , control , swift-storage , ceph-storage , and block-storage are created during undercloud installation and are usable without modification in most environments. Procedure To tag a node into a specific profile, add a profile option to the properties/capabilities parameter for each node. For example, to tag a specific node to use a specific profile, use the following commands: Set the $NODE variable to the name or UUID of the node. Set the $PROFILE variable to the specific profile, such as control or compute . The profile option in properties/capabilities includes the $PROFILE variable to tag the node with the corresponding profile, such as profile:control or profile:compute . Set the boot_option:local option to define how each node boots.
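As an illustrative sketch of the tagging command that this step refers to ($NODE and $PROFILE are the variables described above; the exact invocation may differ slightly in your CLI version): # tag the node with a profile and a local boot option
openstack baremetal node set --property capabilities="profile:$PROFILE,boot_option:local" $NODE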
You can also retain existing capabilities values using an additional openstack baremetal node show command and jq filtering: After you complete node tagging, check the assigned profiles or possible profiles: 7.4. Setting the boot mode to UEFI mode The default boot mode is Legacy BIOS mode. You can configure the nodes in your RHOSP deployment to use UEFI boot mode instead of Legacy BIOS boot mode. Warning Some hardware does not support Legacy BIOS boot mode. If you attempt to use Legacy BIOS boot mode on hardware that does not support Legacy BIOS boot mode, your deployment might fail. To ensure that your hardware deploys successfully, use UEFI boot mode. Note If you enable UEFI boot mode, you must build your own whole-disk image that includes a partitioning layout and bootloader, along with the user image. For more information about creating whole-disk images, see Creating whole-disk images . Procedure Set the following parameters in your undercloud.conf file: Save the undercloud.conf file and run the undercloud installation: Wait until the installation script completes. Check the existing capabilities of each registered node: Replace <node> with the ID of the bare metal node. Set the boot mode to uefi for each registered node by adding boot_mode:uefi to the existing capabilities of the node: Replace <node> with the ID of the bare metal node. Replace <capability_1> , and all capabilities up to <capability_n> , with each capability that you retrieved in step 3. For example, use the following command to set the boot mode to uefi with local boot: Set the boot mode to uefi for each bare metal flavor: 7.5. Enabling virtual media boot Important This feature is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details . You can use Redfish virtual media boot to supply a boot image to the Baseboard Management Controller (BMC) of a node so that the BMC can insert the image into one of the virtual drives. The node can then boot from the virtual drive into the operating system that exists in the image. Redfish hardware types support booting deploy, rescue, and user images over virtual media. The Bare Metal service (ironic) uses kernel and ramdisk images associated with a node to build bootable ISO images for UEFI or BIOS boot modes at the moment of node deployment. The major advantage of virtual media boot is that you can eliminate the TFTP image transfer phase of PXE and use HTTP GET, or other methods, instead. To boot a node with the redfish hardware type over virtual media, set the boot interface to redfish-virtual-media and, for UEFI nodes, define the EFI System Partition (ESP) image. Then configure an enrolled node to use Redfish virtual media boot. Prerequisites Redfish driver enabled in the enabled_hardware_types parameter in the undercloud.conf file. A bare metal node registered and enrolled. IPA and instance images in the Image Service (glance). For UEFI nodes, you must also have an EFI system partition image (ESP) available in the Image Service (glance). A bare metal flavor. A network for cleaning and provisioning. Procedure Set the Bare Metal service (ironic) boot interface to redfish-virtual-media : Replace $NODE_NAME with the name of the node. For UEFI nodes, set the boot mode to uefi : Replace $NODE with the name of the node.
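A minimal sketch of the two settings just described ($NODE_NAME and $NODE are placeholders; the second command applies to UEFI nodes only): # use Redfish virtual media as the boot interface
openstack baremetal node set --boot-interface redfish-virtual-media $NODE_NAME
# UEFI nodes only: set the boot mode capability
openstack baremetal node set --property capabilities="boot_mode:uefi" $NODE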
Note For BIOS nodes, do not complete this step. For UEFI nodes, define the EFI System Partition (ESP) image: Replace $ESP with the glance image UUID or URL for the ESP image, and replace $NODE_NAME with the name of the node. Note For BIOS nodes, do not complete this step. Create a port on the bare metal node and associate the port with the MAC address of the NIC on the bare metal node: Replace $UUID with the UUID of the bare metal node, and replace $MAC_ADDRESS with the MAC address of the NIC on the bare metal node. 7.6. Defining the root disk for multi-disk clusters Most Ceph Storage nodes use multiple disks. When nodes use multiple disks, director must identify the root disk. By default, director writes the overcloud image to the root disk during the provisioning process. Use this procedure to identify the root device by serial number. For more information about other properties you can use to identify the root disk, see Section 7.7, "Properties that identify the root disk" . Procedure Verify the disk information from the hardware introspection of each node. The following command displays the disk information of a node: For example, the data for one node might show three disks: On the undercloud, set the root disk for a node. Include the most appropriate hardware attribute value to define the root disk. For example, to set the root device to disk 2, which has the serial number 61866da04f380d001ea4e13c12e36ad6 , enter the following command: Note Configure the BIOS of each node to boot from the root disk that you choose. Configure the boot order to boot from the network first, then from the root disk. Director identifies the specific disk to use as the root disk. When you run the openstack overcloud deploy command, director provisions and writes the overcloud image to the root disk. 7.7. Properties that identify the root disk There are several properties that you can define to help director identify the root disk: model (String): Device identifier. vendor (String): Device vendor. serial (String): Disk serial number. hctl (String): Host:Channel:Target:Lun for SCSI. size (Integer): Size of the device in GB. wwn (String): Unique storage identifier. wwn_with_extension (String): Unique storage identifier with the vendor extension appended. wwn_vendor_extension (String): Unique vendor storage identifier. rotational (Boolean): True for a rotational device (HDD), otherwise false (SSD). name (String): The name of the device, for example: /dev/sdb1. Important Use the name property only for devices with persistent names. Do not use name to set the root disk for any other devices because this value can change when the node boots. 7.8. Using the overcloud-minimal image to avoid using a Red Hat subscription entitlement By default, director writes the QCOW2 overcloud-full image to the root disk during the provisioning process. The overcloud-full image uses a valid Red Hat subscription. However, you can also use the overcloud-minimal image, for example, to provision a bare OS where you do not want to run any other OpenStack services and consume your subscription entitlements. A common use case for this occurs when you want to provision nodes with only Ceph daemons. For this and similar use cases, you can use the overcloud-minimal image option to avoid reaching the limit of your paid Red Hat subscriptions. For information about how to obtain the overcloud-minimal image, see Obtaining images for overcloud nodes .
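As a sketch of the environment file that the procedure below creates, assuming the minimal image is wanted on Ceph Storage nodes as in the example mentioned there (the parameter name follows the <roleName>Image convention and is an assumption here): # environment file pinning the minimal image to one role (sketch)
parameter_defaults:
  CephStorageImage: overcloud-minimal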
Note A Red Hat OpenStack Platform (RHOSP) subscription contains Open vSwitch (OVS), but core services, such as OVS, are not available when you use the overcloud-minimal image. OVS is not required to deploy Ceph Storage nodes. Use linux_bond instead of ovs_bond to define bonds. For more information about linux_bond , see Linux bonding options . Procedure To configure director to use the overcloud-minimal image, create an environment file that contains the following image definition: Replace <roleName> with the name of the role and append Image to the name of the role. The following example shows an overcloud-minimal image for Ceph storage nodes: In the roles_data.yaml role definition file, set the rhsm_enforce parameter to False . Pass the environment file to the openstack overcloud deploy command. Note The overcloud-minimal image supports only standard Linux bridges and not OVS because OVS is an OpenStack service that requires a Red Hat OpenStack Platform subscription entitlement. 7.9. Creating architecture specific roles When building a multi-architecture cloud, you must add any architecture specific roles to the roles_data.yaml file. The following example includes the ComputePPC64LE role along with the default roles: The Creating a Custom Role File section has information on roles. 7.10. Environment files The undercloud includes a set of heat templates that form the plan for your overcloud creation. You can customize aspects of the overcloud with environment files, which are YAML-formatted files that override parameters and resources in the core heat template collection. You can include as many environment files as necessary. However, the order of the environment files is important because the parameters and resources that you define in subsequent environment files take precedence. Use the following list as an example of the environment file order: The number of nodes and the flavors for each role. It is vital to include this information for overcloud creation. The location of the container images for containerized OpenStack services. Any network isolation files, starting with the initialization file ( environments/network-isolation.yaml ) from the heat template collection, then your custom NIC configuration file, and finally any additional network configurations. For more information, see the following chapters in the Advanced Overcloud Customization guide: "Basic network isolation" "Custom composable networks" "Custom network interface templates" Any external load balancing environment files if you are using an external load balancer. For more information, see External Load Balancing for the Overcloud . Any storage environment files such as Ceph Storage, NFS, or iSCSI. Any environment files for Red Hat CDN or Satellite registration. Any other custom environment files. Note Open Virtual Networking (OVN) is the default networking mechanism driver in Red Hat OpenStack Platform 16.2. If you want to use OVN with distributed virtual routing (DVR), you must include the environments/services/neutron-ovn-dvr-ha.yaml file in the openstack overcloud deploy command. If you want to use OVN without DVR, you must include the environments/services/neutron-ovn-ha.yaml file in the openstack overcloud deploy command. Red Hat recommends that you organize your custom environment files in a separate directory, such as the templates directory. For more information about customizing advanced features for your overcloud, see the Advanced Overcloud Customization guide. 
Important A basic overcloud uses local LVM storage for block storage, which is not a supported configuration. It is recommended to use an external storage solution, such as Red Hat Ceph Storage, for block storage. Note The environment file extension must be .yaml or .template , or it will not be treated as a custom template resource. The few sections contain information about creating some environment files necessary for your overcloud. 7.11. Creating an environment file that defines node counts and flavors By default, director deploys an overcloud with 1 Controller node and 1 Compute node using the baremetal flavor. However, this is only suitable for a proof-of-concept deployment. You can override the default configuration by specifying different node counts and flavors. For a small-scale production environment, deploy at least 3 Controller nodes and 3 Compute nodes, and assign specific flavors to ensure that the nodes have the appropriate resource specifications. Complete the following steps to create an environment file named node-info.yaml that stores the node counts and flavor assignments. Procedure Create a node-info.yaml file in the /home/stack/templates/ directory: Edit the file to include the node counts and flavors that you need. This example contains 3 Controller nodes and 3 Compute nodes: 7.12. Creating an environment file for undercloud CA trust If your undercloud uses TLS and the Certificate Authority (CA) is not publicly trusted, you can use the CA for SSL endpoint encryption that the undercloud operates. To ensure that the undercloud endpoints are accessible to the rest of your deployment, configure your overcloud nodes to trust the undercloud CA. Note For this approach to work, your overcloud nodes must have a network route to the public endpoint on the undercloud. It is likely that you must apply this configuration for deployments that rely on spine-leaf networking. There are two types of custom certificates you can use in the undercloud: User-provided certificates - This definition applies when you have provided your own certificate. This can be from your own CA, or it can be self-signed. This is passed using the undercloud_service_certificate option. In this case, you must either trust the self-signed certificate, or the CA (depending on your deployment). Auto-generated certificates - This definition applies when you use certmonger to generate the certificate using its own local CA. Enable auto-generated certificates with the generate_service_certificate option in the undercloud.conf file. In this case, director generates a CA certificate at /etc/pki/ca-trust/source/anchors/cm-local-ca.pem and the director configures the undercloud's HAProxy instance to use a server certificate. Add the CA certificate to the inject-trust-anchor-hiera.yaml file to present the certificate to OpenStack Platform. This example uses a self-signed certificate located in /home/stack/ca.crt.pem . If you use auto-generated certificates, use /etc/pki/ca-trust/source/anchors/cm-local-ca.pem instead. Procedure Open the certificate file and copy only the certificate portion. Do not include the key: The certificate portion you need looks similar to this shortened example: Create a new YAML file called /home/stack/inject-trust-anchor-hiera.yaml with the following contents, and include the certificate you copied from the PEM file: Note The certificate string must follow the PEM format. Note The CAMap parameter might contain other certificates relevant to SSL/TLS configuration. 
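For illustration, the two environment files described in sections 7.11 and 7.12 might look like the following sketches (the certificate body is a truncated placeholder, and the undercloud-ca key name is an assumption): # /home/stack/templates/node-info.yaml (sketch: 3 Controller and 3 Compute nodes)
parameter_defaults:
  OvercloudControllerFlavor: control
  OvercloudComputeFlavor: compute
  ControllerCount: 3
  ComputeCount: 3

# /home/stack/inject-trust-anchor-hiera.yaml (sketch: undercloud CA trust)
parameter_defaults:
  CAMap:
    undercloud-ca:
      content: |
        -----BEGIN CERTIFICATE-----
        MIID... (truncated placeholder certificate body)
        -----END CERTIFICATE-----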
Director copies the CA certificate to each overcloud node during the overcloud deployment. As a result, each node trusts the encryption presented by the undercloud's SSL endpoints. For more information about environment files, see Section 7.16, "Including environment files in an overcloud deployment" . 7.13. Disabling TSX on new deployments From Red Hat Enterprise Linux 8.3 onwards, the kernel disables support for the Intel Transactional Synchronization Extensions (TSX) feature by default. You must explicitly disable TSX for new overclouds unless you strictly require it for your workloads or third party vendors. Set the KernelArgs heat parameter in an environment file. Include the environment file when you run your openstack overcloud deploy command. Additional resources "Guidance on Intel TSX impact on OpenStack guests (applies for RHEL 8.3 and above)" 7.14. Deployment command The final stage in creating your OpenStack environment is to run the openstack overcloud deploy command to create the overcloud. Before you run this command, familiarize yourself with key options and how to include custom environment files. Warning Do not run openstack overcloud deploy as a background process. The overcloud creation might hang mid-deployment if you run it as a background process. 7.15. Deployment command options The following table lists the additional parameters for the openstack overcloud deploy command. Important Some options are available in this release as a Technology Preview and therefore are not fully supported by Red Hat. They should only be used for testing and should not be used in a production environment. For more information about Technology Preview features, see Scope of Coverage Details . Table 7.1. Deployment command options Parameter Description --templates [TEMPLATES] The directory that contains the heat templates that you want to deploy. If blank, the deployment command uses the default template location at /usr/share/openstack-tripleo-heat-templates/ --stack STACK The name of the stack that you want to create or update -t [TIMEOUT] , --timeout [TIMEOUT] The deployment timeout duration in minutes --libvirt-type [LIBVIRT_TYPE] The virtualization type that you want to use for hypervisors --ntp-server [NTP_SERVER] The Network Time Protocol (NTP) server that you want to use to synchronize time. You can also specify multiple NTP servers in a comma-separated list, for example: --ntp-server 0.centos.pool.org,1.centos.pool.org . For a high availability cluster deployment, it is essential that your Controller nodes are consistently referring to the same time source. Note that a typical environment might already have a designated NTP time source with established practices. --no-proxy [NO_PROXY] Defines custom values for the environment variable no_proxy , which excludes certain host names from proxy communication. --overcloud-ssh-user OVERCLOUD_SSH_USER Defines the SSH user to access the overcloud nodes. Normally SSH access occurs through the heat-admin user. --overcloud-ssh-key OVERCLOUD_SSH_KEY Defines the key path for SSH access to overcloud nodes. --overcloud-ssh-network OVERCLOUD_SSH_NETWORK Defines the network name that you want to use for SSH access to overcloud nodes. -e [EXTRA HEAT TEMPLATE] , --environment-file [ENVIRONMENT FILE] Extra environment files that you want to pass to the overcloud deployment. You can specify this option more than once. Note that the order of environment files that you pass to the openstack overcloud deploy command is important. 
For example, parameters from each sequential environment file override the same parameters from earlier environment files. --environment-directory A directory that contains environment files that you want to include in deployment. The deployment command processes these environment files in numerical order, then alphabetical order. -r ROLES_FILE Defines the roles file and overrides the default roles_data.yaml in the --templates directory. The file location can be an absolute path or the path relative to --templates . -n NETWORKS_FILE Defines the networks file and overrides the default network_data.yaml in the --templates directory. The file location can be an absolute path or the path relative to --templates . -p PLAN_ENVIRONMENT_FILE Defines the plan Environment file and overrides the default plan-environment.yaml in the --templates directory. The file location can be an absolute path or the path relative to --templates . --no-cleanup Use this option if you do not want to delete temporary files after deployment, and log their location. --update-plan-only Use this option if you want to update the plan without performing the actual deployment. --validation-errors-nonfatal The overcloud creation process performs a set of pre-deployment checks. This option exits if any non-fatal errors occur from the pre-deployment checks. It is advisable to use this option as any errors can cause your deployment to fail. --validation-warnings-fatal The overcloud creation process performs a set of pre-deployment checks. This option exits if any non-critical warnings occur from the pre-deployment checks. openstack-tripleo-validations --dry-run Use this option if you want to perform a validation check on the overcloud without creating the overcloud. --run-validations Use this option to run external validations from the openstack-tripleo-validations package. --skip-postconfig Use this option to skip the overcloud post-deployment configuration. --force-postconfig Use this option to force the overcloud post-deployment configuration. --skip-deploy-identifier Use this option if you do not want the deployment command to generate a unique identifier for the DeployIdentifier parameter. The software configuration deployment steps only trigger if there is an actual change to the configuration. Use this option with caution and only if you are confident that you do not need to run the software configuration, such as scaling out certain roles. --answers-file ANSWERS_FILE The path to a YAML file with arguments and parameters. --disable-password-generation Use this option if you want to disable password generation for the overcloud services. --deployed-server Use this option if you want to deploy pre-provisioned overcloud nodes. Used in conjunction with --disable-validations . --no-config-download, --stack-only Use this option if you want to disable the config-download workflow and create only the stack and associated OpenStack resources. This command applies no software configuration to the overcloud. --config-download-only Use this option if you want to disable the overcloud stack creation and only run the config-download workflow to apply the software configuration. --output-dir OUTPUT_DIR The directory that you want to use for saved config-download output. The directory must be writeable by the mistral user. When not specified, director uses the default, which is /var/lib/mistral/overcloud . --override-ansible-cfg OVERRIDE_ANSIBLE_CFG The path to an Ansible configuration file. 
The configuration in the file overrides any configuration that config-download generates by default. --config-download-timeout CONFIG_DOWNLOAD_TIMEOUT The timeout duration in minutes that you want to use for config-download steps. If unset, director sets the default to the amount of time remaining from the --timeout parameter after the stack deployment operation. --limit NODE1,NODE2 Use this option with a comma-separated list of nodes to limit the config-download playbook execution to a specific node or set of nodes. For example, the --limit option can be useful for scale-up operations, when you want to run config-download only on new nodes. This argument might cause live migration of instances between hosts to fail, see Running config-download with the ansible-playbook-command.sh script --tags TAG1,TAG2 (Technology Preview) Use this option with a comma-separated list of tags from the config-download playbook to run the deployment with a specific set of config-download tasks. --skip-tags TAG1,TAG2 (Technology Preview) Use this option with a comma-separated list of tags that you want to skip from the config-download playbook. Run the following command to view a full list of options: Some command line parameters are outdated or deprecated in favor of using heat template parameters, which you include in the parameter_defaults section in an environment file. The following table maps deprecated parameters to their heat template equivalents. Table 7.2. Mapping deprecated CLI parameters to heat template parameters Parameter Description Heat template parameter --control-scale The number of Controller nodes to scale out ControllerCount --compute-scale The number of Compute nodes to scale out ComputeCount --ceph-storage-scale The number of Ceph Storage nodes to scale out CephStorageCount --block-storage-scale The number of Block Storage (cinder) nodes to scale out BlockStorageCount --swift-storage-scale The number of Object Storage (swift) nodes to scale out ObjectStorageCount --control-flavor The flavor that you want to use for Controller nodes OvercloudControllerFlavor --compute-flavor The flavor that you want to use for Compute nodes OvercloudComputeFlavor --ceph-storage-flavor The flavor that you want to use for Ceph Storage nodes OvercloudCephStorageFlavor --block-storage-flavor The flavor that you want to use for Block Storage (cinder) nodes OvercloudBlockStorageFlavor --swift-storage-flavor The flavor that you want to use for Object Storage (swift) nodes OvercloudSwiftStorageFlavor --validation-errors-fatal The overcloud creation process performs a set of pre-deployment checks. This option exits if any fatal errors occur from the pre-deployment checks. It is advisable to use this option because any errors can cause your deployment to fail. No parameter mapping --disable-validations Disable the pre-deployment validations entirely. These validations were built-in pre-deployment validations, which have been replaced with external validations from the openstack-tripleo-validations package. No parameter mapping --config-download Run deployment using the config-download mechanism. This is now the default and this CLI options may be removed in the future. No parameter mapping --rhel-reg Use this option to register overcloud nodes to the Customer Portal or Satellite 6. RhsmVars --reg-method Use this option to define the registration method that you want to use for the overcloud nodes. satellite for Red Hat Satellite 6 or Red Hat Satellite 5, portal for Customer Portal. 
RhsmVars --reg-org [REG_ORG] The organization that you want to use for registration. RhsmVars --reg-force Use this option to register the system even if it is already registered. RhsmVars --reg-sat-url [REG_SAT_URL] The base URL of the Satellite server to register overcloud nodes. Use the Satellite HTTP URL and not the HTTPS URL for this parameter. For example, use http://satellite.example.com and not https://satellite.example.com . The overcloud creation process uses this URL to determine whether the server is a Red Hat Satellite 5 or Red Hat Satellite 6 server. If the server is a Red Hat Satellite 6 server, the overcloud obtains the katello-ca-consumer-latest.noarch.rpm file, registers with subscription-manager , and installs katello-agent . If the server is a Red Hat Satellite 5 server, the overcloud obtains the RHN-ORG-TRUSTED-SSL-CERT file and registers with rhnreg_ks . RhsmVars --reg-activation-key [REG_ACTIVATION_KEY] Use this option to define the activation key that you want to use for registration. RhsmVars These parameters are scheduled for removal in a future version of Red Hat OpenStack Platform. 7.16. Including environment files in an overcloud deployment Use the -e option to include an environment file to customize your overcloud. You can include as many environment files as necessary. However, the order of the environment files is important because the parameters and resources that you define in subsequent environment files take precedence. Any environment files that you add to the overcloud using the -e option become part of the stack definition of the overcloud. The following command is an example of how to start the overcloud creation using environment files defined earlier in this scenario: This command contains the following additional options: --templates Creates the overcloud using the heat template collection in /usr/share/openstack-tripleo-heat-templates as a foundation. -e /home/stack/templates/node-info.yaml Adds an environment file to define how many nodes and which flavors to use for each role. -e /home/stack/containers-prepare-parameter.yaml Adds the container image preparation environment file. You generated this file during the undercloud installation and can use the same file for your overcloud creation. -e /home/stack/inject-trust-anchor-hiera.yaml Adds an environment file to install a custom certificate in the undercloud. -r /home/stack/templates/roles_data.yaml (Optional) The generated roles data if you use custom roles or want to enable a multi architecture cloud. For more information, see Section 7.9, "Creating architecture specific roles" . Director requires these environment files for re-deployment and post-deployment functions. Failure to include these files can result in damage to your overcloud. To modify the overcloud configuration at a later stage, perform the following actions: Modify parameters in the custom environment files and heat templates. Run the openstack overcloud deploy command again with the same environment files. Do not edit the overcloud configuration directly because director overrides any manual configuration when you update the overcloud stack. 7.17. Running the pre-deployment validation Run the pre-deployment validation group to check the deployment requirements. Procedure Source the stackrc file. This validation requires a copy of your overcloud plan. Upload your overcloud plan with all necessary environment files. 
To upload your plan only, run the openstack overcloud deploy command with the --update-plan-only option: Run the openstack tripleo validator run command with the --group pre-deployment option: If the overcloud uses a plan name that is different to the default overcloud name, set the plan name with the -- plan option: Review the results of the validation report. To view detailed output from a specific validation, run the openstack tripleo validator show run --full command against the UUID of the specific validation from the report: Important A FAILED validation does not prevent you from deploying or running Red Hat OpenStack Platform. However, a FAILED validation can indicate a potential issue with a production environment. 7.18. Overcloud deployment output When the overcloud creation completes, director provides a recap of the Ansible plays that were executed to configure the overcloud: Director also provides details to access your overcloud. 7.19. Accessing the overcloud Director generates a script to configure and help authenticate interactions with your overcloud from the undercloud. Director saves this file, overcloudrc , in the home directory of the stack user. Run the following command to use this file: This command loads the environment variables that are necessary to interact with your overcloud from the undercloud CLI. The command prompt changes to indicate this: To return to interacting with the undercloud, run the following command: 7.20. Running the post-deployment validation Run the post-deployment validation group to check the post-deployment state. Procedure Source the stackrc file. Run the openstack tripleo validator run command with the --group post-deployment option: If the overcloud uses a plan name that is different to the default overcloud name, set the plan name with the -- plan option: Review the results of the validation report. To view detailed output from a specific validation, run the openstack tripleo validator show run --full command against the UUID of the specific validation from the report: Important A FAILED validation does not prevent you from deploying or running Red Hat OpenStack Platform. However, a FAILED validation can indicate a potential issue with a production environment. | [
"{ \"nodes\": [{ \"name\": \"node01\", \"ports\": [{ \"address\": \"aa:aa:aa:aa:aa:aa\", \"physical_network\": \"ctlplane\", \"local_link_connection\": { \"switch_id\": \"52:54:00:00:00:00\", \"port_id\": \"p0\" } }], \"cpu\": \"4\", \"memory\": \"6144\", \"disk\": \"40\", \"arch\": \"x86_64\", \"pm_type\": \"ipmi\", \"pm_user\": \"admin\", \"pm_password\": \"p@55w0rd!\", \"pm_addr\": \"192.168.24.205\" }, { \"name\": \"node02\", \"ports\": [{ \"address\": \"bb:bb:bb:bb:bb:bb\", \"physical_network\": \"ctlplane\", \"local_link_connection\": { \"switch_id\": \"52:54:00:00:00:00\", \"port_id\": \"p0\" } }], \"cpu\": \"4\", \"memory\": \"6144\", \"disk\": \"40\", \"arch\": \"x86_64\", \"pm_type\": \"ipmi\", \"pm_user\": \"admin\", \"pm_password\": \"p@55w0rd!\", \"pm_addr\": \"192.168.24.206\" }] }",
"nodes: - name: \"node01\" ports: - address: \"aa:aa:aa:aa:aa:aa\" physical_network: ctlplane local_link_connection: switch_id: \"52:54:00:00:00:00\" port_id: p0 cpu: 4 memory: 6144 disk: 40 arch: \"x86_64\" pm_type: \"ipmi\" pm_user: \"admin\" pm_password: \"p@55w0rd!\" pm_addr: \"192.168.24.205\" - name: \"node02\" ports: - address: \"bb:bb:bb:bb:bb:bb\" physical_network: ctlplane local_link_connection: switch_id: \"52:54:00:00:00:00\" port_id: p0 cpu: 4 memory: 6144 disk: 40 arch: \"x86_64\" pm_type: \"ipmi\" pm_user: \"admin\" pm_password: \"p@55w0rd!\" pm_addr: \"192.168.24.206\"",
"source ~/stackrc (undercloud)USD openstack overcloud node import --validate-only ~/nodes.json",
"(undercloud)USD openstack overcloud node import ~/nodes.json",
"(undercloud)USD openstack baremetal node list",
"source ~/stackrc",
"(undercloud)USD openstack tripleo validator run --group pre-introspection",
"(undercloud)USD openstack tripleo validator show run --full <validation>",
"(undercloud)USD openstack overcloud node introspect --all-manageable --provide",
"(undercloud)USD openstack overcloud node introspect --provide <node1> [node2] [noden]",
"(undercloud)USD sudo tail -f /var/log/containers/ironic-inspector/ironic-inspector.log",
"(undercloud)USD openstack baremetal port list --long -c UUID -c \"Node UUID\" -c \"Local Link Connection\"",
"(undercloud)USD openstack baremetal port set <port_uuid> --local-link-connection switch_id=52:54:00:00:00:00 --local-link-connection port_id=p0",
"(undercloud)USD openstack baremetal port list --long -c UUID -c \"Node UUID\" -c \"Local Link Connection\"",
"source ~/stackrc",
"(undercloud)USD openstack baremetal node set --property capabilities=\"boot_option:local\" <node>",
"(undercloud)USD openstack baremetal node set <node> --driver-info deploy_kernel=<kernel_file> --driver-info deploy_ramdisk=<initramfs_file>",
"(undercloud)USD openstack baremetal node set <node> --property cpus=<cpu> --property memory_mb=<ram> --property local_gb=<disk> --property cpu_arch=<arch>",
"(undercloud)USD openstack baremetal node set <node> --driver-info ipmi_cipher_suite=<version>",
"(undercloud)USD openstack baremetal node set <node> --property root_device='{\"<property>\": \"<value>\"}'",
"(undercloud)USD openstack baremetal port create --node <node_uuid> <mac_address>",
"(undercloud)USD openstack baremetal node validate <node> +------------+--------+---------------------------------------------+ | Interface | Result | Reason | +------------+--------+---------------------------------------------+ | boot | False | Cannot validate image information for node | | | | a02178db-1550-4244-a2b7-d7035c743a9b | | | | because one or more parameters are missing | | | | from its instance_info. Missing are: | | | | ['ramdisk', 'kernel', 'image_source'] | | console | None | not supported | | deploy | False | Cannot validate image information for node | | | | a02178db-1550-4244-a2b7-d7035c743a9b | | | | because one or more parameters are missing | | | | from its instance_info. Missing are: | | | | ['ramdisk', 'kernel', 'image_source'] | | inspect | None | not supported | | management | True | | | network | True | | | power | True | | | raid | True | | | storage | True | | +------------+--------+---------------------------------------------+",
"(undercloud) USD NODE=<NODE NAME OR ID> (undercloud) USD PROFILE=<PROFILE NAME> (undercloud) USD openstack baremetal node set --property capabilities=\"profile:USDPROFILE,boot_option:local\" USDNODE",
"(undercloud) USD openstack baremetal node set --property capabilities=\"profile:USDPROFILE,boot_option:local,USD(openstack baremetal node show USDNODE -f json -c properties | jq -r .properties.capabilities | sed \"s/boot_mode:[^,]*,//g\")\" USDNODE",
"(undercloud) USD openstack overcloud profiles list",
"ipxe_enabled = True",
"openstack undercloud install",
"openstack baremetal node show <node> -f json -c properties | jq -r .properties.capabilities",
"openstack baremetal node set --property capabilities=\"boot_mode:uefi,<capability_1>,...,<capability_n>\" <node>",
"openstack baremetal node set --property capabilities=\"boot_mode:uefi,boot_option:local\" <node>",
"openstack flavor set --property capabilities:boot_mode='uefi' <flavor>",
"openstack baremetal node set --boot-interface redfish-virtual-media USDNODE_NAME",
"NODE=<NODE NAME OR ID> ; openstack baremetal node set --property capabilities=\"boot_mode:uefi,USD(openstack baremetal node show USDNODE -f json -c properties | jq -r .properties.capabilities | sed \"s/boot_mode:[^,]*,//g\")\" USDNODE",
"openstack baremetal node set --driver-info bootloader=USDESP USDNODE_NAME",
"openstack baremetal port create --pxe-enabled True --node USDUUID USDMAC_ADDRESS",
"(undercloud)USD openstack baremetal introspection data save 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 | jq \".inventory.disks\"",
"[ { \"size\": 299439751168, \"rotational\": true, \"vendor\": \"DELL\", \"name\": \"/dev/sda\", \"wwn_vendor_extension\": \"0x1ea4dcc412a9632b\", \"wwn_with_extension\": \"0x61866da04f3807001ea4dcc412a9632b\", \"model\": \"PERC H330 Mini\", \"wwn\": \"0x61866da04f380700\", \"serial\": \"61866da04f3807001ea4dcc412a9632b\" } { \"size\": 299439751168, \"rotational\": true, \"vendor\": \"DELL\", \"name\": \"/dev/sdb\", \"wwn_vendor_extension\": \"0x1ea4e13c12e36ad6\", \"wwn_with_extension\": \"0x61866da04f380d001ea4e13c12e36ad6\", \"model\": \"PERC H330 Mini\", \"wwn\": \"0x61866da04f380d00\", \"serial\": \"61866da04f380d001ea4e13c12e36ad6\" } { \"size\": 299439751168, \"rotational\": true, \"vendor\": \"DELL\", \"name\": \"/dev/sdc\", \"wwn_vendor_extension\": \"0x1ea4e31e121cfb45\", \"wwn_with_extension\": \"0x61866da04f37fc001ea4e31e121cfb45\", \"model\": \"PERC H330 Mini\", \"wwn\": \"0x61866da04f37fc00\", \"serial\": \"61866da04f37fc001ea4e31e121cfb45\" } ]",
"(undercloud)USD openstack baremetal node set --property root_device='{\"serial\":\"<serial_number>\"}' <node-uuid>",
"(undercloud)USD openstack baremetal node set --property root_device='{\"serial\": \"61866da04f380d001ea4e13c12e36ad6\"}' 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0",
"parameter_defaults: <roleName>Image: overcloud-minimal",
"parameter_defaults: CephStorageImage: overcloud-minimal",
"rhsm_enforce: False",
"openstack overcloud roles generate --roles-path /usr/share/openstack-tripleo-heat-templates/roles -o ~/templates/roles_data.yaml Controller Compute ComputePPC64LE BlockStorage ObjectStorage CephStorage",
"(undercloud) USD touch /home/stack/templates/node-info.yaml",
"parameter_defaults: OvercloudControllerFlavor: control OvercloudComputeFlavor: compute ControllerCount: 3 ComputeCount: 3",
"vi /home/stack/ca.crt.pem",
"-----BEGIN CERTIFICATE----- MIIDlTCCAn2gAwIBAgIJAOnPtx2hHEhrMA0GCSqGSIb3DQEBCwUAMGExCzAJBgNV BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH UmVkIEhhdDELMAkGA1UECwwCUUUxFDASBgNVBAMMCzE5Mi4xNjguMC4yMB4XDTE3 -----END CERTIFICATE-----",
"parameter_defaults: CAMap: undercloud-ca: content: | -----BEGIN CERTIFICATE----- MIIDlTCCAn2gAwIBAgIJAOnPtx2hHEhrMA0GCSqGSIb3DQEBCwUAMGExCzAJBgNV BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH UmVkIEhhdDELMAkGA1UECwwCUUUxFDASBgNVBAMMCzE5Mi4xNjguMC4yMB4XDTE3 -----END CERTIFICATE-----",
"parameter_defaults: ComputeParameters: KernelArgs: \"tsx=off\"",
"(undercloud) USD openstack help overcloud deploy",
"(undercloud) USD openstack overcloud deploy --templates -e /home/stack/templates/node-info.yaml -e /home/stack/containers-prepare-parameter.yaml -e /home/stack/inject-trust-anchor-hiera.yaml -r /home/stack/templates/roles_data.yaml \\",
"source ~/stackrc",
"openstack overcloud deploy --templates -e environment-file1.yaml -e environment-file2.yaml --update-plan-only",
"openstack tripleo validator run --group pre-deployment",
"openstack tripleo validator run --group pre-deployment --plan myovercloud",
"openstack tripleo validator show run --full <UUID>",
"PLAY RECAP ************************************************************* overcloud-compute-0 : ok=160 changed=67 unreachable=0 failed=0 overcloud-controller-0 : ok=210 changed=93 unreachable=0 failed=0 undercloud : ok=10 changed=7 unreachable=0 failed=0 Tuesday 15 October 2018 18:30:57 +1000 (0:00:00.107) 1:06:37.514 ****** ========================================================================",
"Ansible passed. Overcloud configuration completed. Overcloud Endpoint: http://192.168.24.113:5000 Overcloud Horizon Dashboard URL: http://192.168.24.113:80/dashboard Overcloud rc file: /home/stack/overcloudrc Overcloud Deployed",
"(undercloud) USD source ~/overcloudrc",
"(overcloud) USD",
"(overcloud) USD source ~/stackrc (undercloud) USD",
"source ~/stackrc",
"openstack tripleo validator run --group post-deployment",
"openstack tripleo validator run --group post-deployment --plan myovercloud",
"openstack tripleo validator show run --full <UUID>"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/director_installation_and_usage/assembly_configuring-a-basic-overcloud |
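A sketch of how the --config-download-only and --limit options described above combine for a scale-out: the node name overcloud-novacompute-3 is a hypothetical example, and the environment files are the ones used earlier in this scenario.

openstack overcloud deploy --templates \
  -e /home/stack/templates/node-info.yaml \
  -e /home/stack/containers-prepare-parameter.yaml \
  -e /home/stack/inject-trust-anchor-hiera.yaml \
  --config-download-only \
  --limit overcloud-novacompute-3

Because --limit restricts only the config-download playbook run, pass the same environment files as the original deployment so that the stack definition stays consistent.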
Chapter 12. Network Observability CLI | Chapter 12. Network Observability CLI 12.1. Installing the Network Observability CLI The Network Observability CLI ( oc netobserv ) is deployed separately from the Network Observability Operator. The CLI is available as an OpenShift CLI ( oc ) plugin. It provides a lightweight way to quickly debug and troubleshoot with network observability. 12.1.1. About the Network Observability CLI You can quickly debug and troubleshoot networking issues by using the Network Observability CLI ( oc netobserv ). The Network Observability CLI is a flow and packet visualization tool that relies on eBPF agents to stream collected data to an ephemeral collector pod. It requires no persistent storage during the capture. After the run, the output is transferred to your local machine. This enables quick, live insight into packets and flow data without installing the Network Observability Operator. Important CLI capture is meant to run only for short durations, such as 8-10 minutes. If it runs for too long, it can be difficult to delete the running process. 12.1.2. Installing the Network Observability CLI Installing the Network Observability CLI ( oc netobserv ) is a separate procedure from the Network Observability Operator installation. This means that, even if you have the Operator installed from OperatorHub, you need to install the CLI separately. Note You can optionally use Krew to install the netobserv CLI plugin. For more information, see "Installing a CLI plugin with Krew". Prerequisites You must install the OpenShift CLI ( oc ). You must have a macOS or Linux operating system. Procedure Download the oc netobserv file that corresponds with your architecture. For example, for the amd64 archive: USD curl -LO https://mirror.openshift.com/pub/cgw/netobserv/latest/oc-netobserv-amd64 Make the file executable: USD chmod +x ./oc-netobserv-amd64 Move the extracted netobserv-cli binary to a directory that is on your PATH , such as /usr/local/bin/ : USD sudo mv ./oc-netobserv-amd64 /usr/local/bin/oc-netobserv Verification Verify that oc netobserv is available: USD oc netobserv version Example output Netobserv CLI version <version> Additional resources Installing and using CLI plugins Installing the CLI Manager Operator 12.2. Using the Network Observability CLI You can visualize and filter the flows and packets data directly in the terminal to see specific usage, such as identifying who is using a specific port. The Network Observability CLI collects flows as JSON and database files or packets as a PCAP file, which you can use with third-party tools. 12.2.1. Capturing flows You can capture flows and filter on any resource or zone in the data to solve use cases, such as displaying Round-Trip Time (RTT) between two zones. Table visualization in the CLI provides viewing and flow search capabilities. Prerequisites Install the OpenShift CLI ( oc ). Install the Network Observability CLI ( oc netobserv ) plugin. Procedure Capture flows with filters enabled by running the following command: USD oc netobserv flows --enable_filter=true --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051 Add filters to the live table filter prompt in the terminal to further refine the incoming flows. For example: live table filter: [SrcK8S_Zone:us-west-1b] press enter to match multiple regular expressions at once Use the PageUp and PageDown keys to toggle between None , Resource , Zone , Host , Owner and all of the above . To stop capturing, press Ctrl + C . 
The data that was captured is written to two separate files in an ./output directory located in the same path used to install the CLI. View the captured data in the ./output/flow/<capture_date_time>.json JSON file, which contains JSON arrays of the captured data. Example JSON file { "AgentIP": "10.0.1.76", "Bytes": 561, "DnsErrno": 0, "Dscp": 20, "DstAddr": "f904:ece9:ba63:6ac7:8018:1e5:7130:0", "DstMac": "0A:58:0A:80:00:37", "DstPort": 9999, "Duplicate": false, "Etype": 2048, "Flags": 16, "FlowDirection": 0, "IfDirection": 0, "Interface": "ens5", "K8S_FlowLayer": "infra", "Packets": 1, "Proto": 6, "SrcAddr": "3e06:6c10:6440:2:a80:37:b756:270f", "SrcMac": "0A:58:0A:80:00:01", "SrcPort": 46934, "TimeFlowEndMs": 1709741962111, "TimeFlowRttNs": 121000, "TimeFlowStartMs": 1709741962111, "TimeReceived": 1709741964 } You can use SQLite to inspect the ./output/flow/<capture_date_time>.db database file. For example: Open the file by running the following command: USD sqlite3 ./output/flow/<capture_date_time>.db Query the data by running a SQLite SELECT statement, for example: sqlite> SELECT DnsLatencyMs, DnsFlagsResponseCode, DnsId, DstAddr, DstPort, Interface, Proto, SrcAddr, SrcPort, Bytes, Packets FROM flow WHERE DnsLatencyMs >10 LIMIT 10; Example output 12|NoError|58747|10.128.0.63|57856||17|172.30.0.10|53|284|1 11|NoError|20486|10.128.0.52|56575||17|169.254.169.254|53|225|1 11|NoError|59544|10.128.0.103|51089||17|172.30.0.10|53|307|1 13|NoError|32519|10.128.0.52|55241||17|169.254.169.254|53|254|1 12|NoError|32519|10.0.0.3|55241||17|169.254.169.254|53|254|1 15|NoError|57673|10.128.0.19|59051||17|172.30.0.10|53|313|1 13|NoError|35652|10.0.0.3|46532||17|169.254.169.254|53|183|1 32|NoError|37326|10.0.0.3|52718||17|169.254.169.254|53|169|1 14|NoError|14530|10.0.0.3|58203||17|169.254.169.254|53|246|1 15|NoError|40548|10.0.0.3|45933||17|169.254.169.254|53|174|1 12.2.2. Capturing packets You can capture packets using the Network Observability CLI. Prerequisites Install the OpenShift CLI ( oc ). Install the Network Observability CLI ( oc netobserv ) plugin. Procedure Run the packet capture with filters enabled: USD oc netobserv packets --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051 Add filters to the live table filter prompt in the terminal to refine the incoming packets. An example filter is as follows: live table filter: [SrcK8S_Zone:us-west-1b] press enter to match multiple regular expressions at once Use the PageUp and PageDown keys to toggle between None , Resource , Zone , Host , Owner and all of the above . To stop capturing, press Ctrl + C . View the captured data, which is written to a single file in an ./output/pcap directory located in the same path that was used to install the CLI: The ./output/pcap/<capture_date_time>.pcap file can be opened with Wireshark. 12.2.3. Capturing metrics You can generate on-demand dashboards in Prometheus by using a service monitor for Network Observability. Prerequisites Install the OpenShift CLI ( oc ). Install the Network Observability CLI ( oc netobserv ) plugin. Procedure Capture metrics with filters enabled by running the following command: Example output USD oc netobserv metrics --enable_filter=true --cidr=0.0.0.0/0 --protocol=TCP --port=49051 Open the link provided in the terminal to view the NetObserv / On-Demand dashboard: Example URL https://console-openshift-console.apps.rosa...openshiftapps.com/monitoring/dashboards/netobserv-cli Note Features that are not enabled present as empty graphs. 12.2.4. 
Cleaning the Network Observability CLI You can manually clean the CLI workload by running oc netobserv cleanup . This command removes all the CLI components from your cluster. When you end a capture, this command is run automatically by the client. You might be required to manually run it if you experience connectivity issues. Procedure Run the following command: USD oc netobserv cleanup Additional resources Network Observability CLI reference 12.3. Network Observability CLI (oc netobserv) reference The Network Observability CLI ( oc netobserv ) has most features and filtering options that are available for the Network Observability Operator. You can pass command line arguments to enable features or filtering options. 12.3.1. Network Observability CLI usage You can use the Network Observability CLI ( oc netobserv ) to pass command line arguments to capture flows data, packets data, and metrics for further analysis and enable features supported by the Network Observability Operator. 12.3.1.1. Syntax The basic syntax for oc netobserv commands: oc netobserv syntax USD oc netobserv [<command>] [<feature_option>] [<command_options>] 1 1 1 Feature options can only be used with the oc netobserv flows command. They cannot be used with the oc netobserv packets command. 12.3.1.2. Basic commands Table 12.1. Basic commands Command Description flows Capture flows information. For subcommands, see the "Flows capture options" table. packets Capture packets data. For subcommands, see the "Packets capture options" table. metrics Capture metrics data. For subcommands, see the "Metrics capture options" table. follow Follow collector logs when running in background. stop Stop collection by removing agent daemonset. copy Copy collector generated files locally. cleanup Remove the Network Observability CLI components. version Print the software version. help Show help. 12.3.1.3. Flows capture options Flows capture has mandatory commands as well as additional options, such as enabling extra features about packet drops, DNS latencies, Round-trip time, and filtering. 
oc netobserv flows syntax USD oc netobserv flows [<feature_option>] [<command_options>] Option Description Default --enable_all enable all eBPF features false --enable_dns enable DNS tracking false --enable_network_events enable network events monitoring false --enable_pkt_translation enable packet translation false --enable_pkt_drop enable packet drop false --enable_rtt enable RTT tracking false --enable_udn_mapping enable User Defined Network mapping false --get-subnets get subnets information false --background run in background false --copy copy the output files locally prompt --log-level components logs info --max-time maximum capture time 5m --max-bytes maximum capture bytes 50000000 = 50MB --action filter action Accept --cidr filter CIDR 0.0.0.0/0 --direction filter direction - --dport filter destination port - --dport_range filter destination port range - --dports filter on either of two destination ports - --drops filter flows with only dropped packets false --icmp_code filter ICMP code - --icmp_type filter ICMP type - --node-selector capture on specific nodes - --peer_ip filter peer IP - --peer_cidr filter peer CIDR - --port_range filter port range - --port filter port - --ports filter on either of two ports - --protocol filter protocol - --regexes filter flows using regular expression - --sport_range filter source port range - --sport filter source port - --sports filter on either of two source ports - --tcp_flags filter TCP flags - --interfaces interfaces to monitor - Example running flows capture on TCP protocol and port 49051 with PacketDrop and RTT features enabled: USD oc netobserv flows --enable_pkt_drop --enable_rtt --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051 12.3.1.4. Packets capture options You can filter packets capture data the as same as flows capture by using the filters. Certain features, such as packets drop, DNS, RTT, and network events, are only available for flows and metrics capture. oc netobserv packets syntax USD oc netobserv packets [<option>] Option Description Default --background run in background false --copy copy the output files locally prompt --log-level components logs info --max-time maximum capture time 5m --max-bytes maximum capture bytes 50000000 = 50MB --action filter action Accept --cidr filter CIDR 0.0.0.0/0 --direction filter direction - --dport filter destination port - --dport_range filter destination port range - --dports filter on either of two destination ports - --drops filter flows with only dropped packets false --icmp_code filter ICMP code - --icmp_type filter ICMP type - --node-selector capture on specific nodes - --peer_ip filter peer IP - --peer_cidr filter peer CIDR - --port_range filter port range - --port filter port - --ports filter on either of two ports - --protocol filter protocol - --regexes filter flows using regular expression - --sport_range filter source port range - --sport filter source port - --sports filter on either of two source ports - --tcp_flags filter TCP flags - Example running packets capture on TCP protocol and port 49051: USD oc netobserv packets --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051 12.3.1.5. Metrics capture options You can enable features and use filters on metrics capture, the same as flows capture. The generated graphs fill accordingly in the dashboard. 
oc netobserv metrics syntax USD oc netobserv metrics [<option>] Option Description Default --enable_all enable all eBPF features false --enable_dns enable DNS tracking false --enable_network_events enable network events monitoring false --enable_pkt_translation enable packet translation false --enable_pkt_drop enable packet drop false --enable_rtt enable RTT tracking false --enable_udn_mapping enable User Defined Network mapping false --get-subnets get subnets information false --action filter action Accept --cidr filter CIDR 0.0.0.0/0 --direction filter direction - --dport filter destination port - --dport_range filter destination port range - --dports filter on either of two destination ports - --drops filter flows with only dropped packets false --icmp_code filter ICMP code - --icmp_type filter ICMP type - --node-selector capture on specific nodes - --peer_ip filter peer IP - --peer_cidr filter peer CIDR - --port_range filter port range - --port filter port - --ports filter on either of two ports - --protocol filter protocol - --regexes filter flows using regular expression - --sport_range filter source port range - --sport filter source port - --sports filter on either of two source ports - --tcp_flags filter TCP flags - --interfaces interfaces to monitor - Example running metrics capture for TCP drops USD oc netobserv metrics --enable_pkt_drop --protocol=TCP | [
"curl -LO https://mirror.openshift.com/pub/cgw/netobserv/latest/oc-netobserv-amd64",
"chmod +x ./oc-netobserv-amd64",
"sudo mv ./oc-netobserv-amd64 /usr/local/bin/oc-netobserv",
"oc netobserv version",
"Netobserv CLI version <version>",
"oc netobserv flows --enable_filter=true --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051",
"live table filter: [SrcK8S_Zone:us-west-1b] press enter to match multiple regular expressions at once",
"{ \"AgentIP\": \"10.0.1.76\", \"Bytes\": 561, \"DnsErrno\": 0, \"Dscp\": 20, \"DstAddr\": \"f904:ece9:ba63:6ac7:8018:1e5:7130:0\", \"DstMac\": \"0A:58:0A:80:00:37\", \"DstPort\": 9999, \"Duplicate\": false, \"Etype\": 2048, \"Flags\": 16, \"FlowDirection\": 0, \"IfDirection\": 0, \"Interface\": \"ens5\", \"K8S_FlowLayer\": \"infra\", \"Packets\": 1, \"Proto\": 6, \"SrcAddr\": \"3e06:6c10:6440:2:a80:37:b756:270f\", \"SrcMac\": \"0A:58:0A:80:00:01\", \"SrcPort\": 46934, \"TimeFlowEndMs\": 1709741962111, \"TimeFlowRttNs\": 121000, \"TimeFlowStartMs\": 1709741962111, \"TimeReceived\": 1709741964 }",
"sqlite3 ./output/flow/<capture_date_time>.db",
"sqlite> SELECT DnsLatencyMs, DnsFlagsResponseCode, DnsId, DstAddr, DstPort, Interface, Proto, SrcAddr, SrcPort, Bytes, Packets FROM flow WHERE DnsLatencyMs >10 LIMIT 10;",
"12|NoError|58747|10.128.0.63|57856||17|172.30.0.10|53|284|1 11|NoError|20486|10.128.0.52|56575||17|169.254.169.254|53|225|1 11|NoError|59544|10.128.0.103|51089||17|172.30.0.10|53|307|1 13|NoError|32519|10.128.0.52|55241||17|169.254.169.254|53|254|1 12|NoError|32519|10.0.0.3|55241||17|169.254.169.254|53|254|1 15|NoError|57673|10.128.0.19|59051||17|172.30.0.10|53|313|1 13|NoError|35652|10.0.0.3|46532||17|169.254.169.254|53|183|1 32|NoError|37326|10.0.0.3|52718||17|169.254.169.254|53|169|1 14|NoError|14530|10.0.0.3|58203||17|169.254.169.254|53|246|1 15|NoError|40548|10.0.0.3|45933||17|169.254.169.254|53|174|1",
"oc netobserv packets --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051",
"live table filter: [SrcK8S_Zone:us-west-1b] press enter to match multiple regular expressions at once",
"oc netobserv metrics --enable_filter=true --cidr=0.0.0.0/0 --protocol=TCP --port=49051",
"https://console-openshift-console.apps.rosa...openshiftapps.com/monitoring/dashboards/netobserv-cli",
"oc netobserv cleanup",
"oc netobserv [<command>] [<feature_option>] [<command_options>] 1",
"oc netobserv flows [<feature_option>] [<command_options>]",
"oc netobserv flows --enable_pkt_drop --enable_rtt --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051",
"oc netobserv packets [<option>]",
"oc netobserv packets --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051",
"oc netobserv metrics [<option>]",
"oc netobserv metrics --enable_pkt_drop --protocol=TCP"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/network_observability/network-observability-cli-1 |
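A sketch of how the basic commands above chain together for a background capture; the port number and feature flags are illustrative choices rather than values from this chapter.

oc netobserv flows --enable_dns --enable_rtt --protocol=TCP --port=443 --background
oc netobserv follow    # follow the collector logs while the capture runs in the background
oc netobserv stop      # stop collection by removing the eBPF agent daemonset
oc netobserv copy      # copy the collector-generated files to the local ./output directory
oc netobserv cleanup   # remove the remaining Network Observability CLI components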
Chapter 5. Configuring the web console in OpenShift Container Platform | Chapter 5. Configuring the web console in OpenShift Container Platform You can modify the OpenShift Container Platform web console to set a logout redirect URL or disable the quick start tutorials. 5.1. Prerequisites Deploy an OpenShift Container Platform cluster. 5.2. Configuring the web console You can configure the web console settings by editing the console.config.openshift.io resource. Edit the console.config.openshift.io resource: USD oc edit console.config.openshift.io cluster The following example displays the sample resource definition for the console: apiVersion: config.openshift.io/v1 kind: Console metadata: name: cluster spec: authentication: logoutRedirect: "" 1 status: consoleURL: "" 2 1 Specify the URL of the page to load when a user logs out of the web console. If you do not specify a value, the user returns to the login page for the web console. Specifying a logoutRedirect URL allows your users to perform single logout (SLO) through the identity provider to destroy their single sign-on session. 2 The web console URL. To update this to a custom value, see Customizing the web console URL . 5.3. Disabling quick starts in the web console You can use the Administrator perspective of the web console to disable one or more quick starts. Prerequisites You have cluster administrator permissions and are logged in to the web console. Procedure In the Administrator perspective, navigate to Administration Cluster Settings . On the Cluster Settings page, click the Configuration tab. On the Configuration page, click the Console configuration resource with the description operator.openshift.io . From the Action drop-down list, select Customize , which opens the Cluster configuration page. On the General tab, in the Quick starts section, you can select items in either the Enabled or Disabled list, and move them from one list to the other by using the arrow buttons. To enable or disable a single quick start, click the quick start, then use the single arrow buttons to move the quick start to the appropriate list. To enable or disable multiple quick starts at once, press Ctrl and click the quick starts you want to move. Then, use the single arrow buttons to move the quick starts to the appropriate list. To enable or disable all quick starts at once, click the double arrow buttons to move all of the quick starts to the appropriate list.
"oc edit console.config.openshift.io cluster",
"apiVersion: config.openshift.io/v1 kind: Console metadata: name: cluster spec: authentication: logoutRedirect: \"\" 1 status: consoleURL: \"\" 2"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/web_console/configuring-web-console |
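As a non-interactive alternative to the oc edit step above, the same logoutRedirect value can be applied with a patch; the URL here is a placeholder for your identity provider's logout endpoint.

oc patch console.config.openshift.io cluster --type merge \
  -p '{"spec":{"authentication":{"logoutRedirect":"https://idp.example.com/logout"}}}'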
3.14. Configuring Virtual Machines in a Clustered Environment | 3.14. Configuring Virtual Machines in a Clustered Environment When you configure your cluster with virtual machine resources, you should use the rgmanager tools to start and stop the virtual machines. Using virsh to start the machine can result in the virtual machine running in more than one place, which can cause data corruption in the virtual machine. To reduce the chances of administrators accidentally "double-starting" virtual machines by using both cluster and non-cluster tools in a clustered environment, you can configure your system by storing the virtual machine configuration files in a non-default location. Storing the virtual machine configuration files somewhere other than their default location makes it more difficult to accidentally start a virtual machine using virsh , as the configuration file will be unknown out of the box to virsh . The non-default location for virtual machine configuration files may be anywhere. The advantage of using an NFS share or a shared GFS2 file system is that the administrator does not need to keep the configuration files in sync across the cluster members. However, it is also permissible to use a local directory as long as the administrator keeps the contents synchronized somehow cluster-wide. In the cluster configuration, virtual machines may reference this non-default location by using the path attribute of a virtual machine resource. Note that the path attribute is a directory or set of directories separated by the colon ':' character, not a path to a specific file. Warning The libvirt-guests service should be disabled on all the nodes that are running rgmanager . If a virtual machine autostarts or resumes, this can result in the virtual machine running in more than one place, which can cause data corruption in the virtual machine. For more information on the attributes of a virtual machine resources, see Table B.26, "Virtual Machine ( vm Resource)" . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-vm-considerations-CA |
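A minimal sketch of the setup this section describes, assuming RHEL 6 service tooling; the resource name and directories are illustrative, and only the name and path attributes are taken from this section.

# on every cluster node that runs rgmanager
service libvirt-guests stop
chkconfig libvirt-guests off

A vm resource in cluster.conf can then point at the non-default configuration directories:

<vm name="guest1" path="/mnt/vmstore/configs:/srv/vmconfigs"/>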
10.2. Configure Node Security in Library Mode | 10.2. Configure Node Security in Library Mode In Library mode, node authentication is configured directly in the JGroups configuration. JGroups can be configured so that nodes must authenticate each other when joining or merging with a cluster. The authentication uses SASL and is enabled by adding the SASL protocol to your JGroups XML configuration. SASL relies on JAAS notions, such as CallbackHandlers , to obtain certain information necessary for the authentication handshake. Users must supply their own CallbackHandlers on both client and server sides. Important The JAAS API is only available when configuring user authentication and authorization, and is not available for node security. Note In the provided example, CallbackHandler classes are examples only, and not contained in the Red Hat JBoss Data Grid release. Users must provide the appropriate CallbackHandler classes for their specific LDAP implementation. Example 10.4. Setting Up SASL Authentication in JGroups The above example uses the DIGEST-MD5 mechanism. Each node must declare the user and password it will use when joining the cluster. Important The SASL protocol must be placed before the GMS protocol in order for authentication to take effect. The following example demonstrates how to implement a CallbackHandler class. In this example, login and password are checked against values provided via Java properties when JBoss Data Grid is started, and authorization is checked against a role which is defined in the class ( "test_user" ). Example 10.5. Callback Handler Class For authentication, specify the javax.security.auth.callback.NameCallback and javax.security.auth.callback.PasswordCallback callbacks. For authorization, specify the callbacks required for authentication, as well as specifying the javax.security.sasl.AuthorizeCallback callback. 10.2.1. Simple Authorizing Callback Handler For instances where a more complex Kerberos or LDAP approach is not needed, the SimpleAuthorizingCallbackHandler class may be used. To enable this, set both the server_callback_handler and the client_callback_handler to org.jgroups.auth.sasl.SimpleAuthorizingCallbackHandler , as seen in the below example: The SimpleAuthorizingCallbackHandler may be configured either programmatically, by passing the constructor an instance of java.util.Properties , or via standard Java system properties, set on the command line using the -DpropertyName=propertyValue notation. The following properties are available: sasl.credentials.properties - the path to a property file which contains principal/credential mappings represented as principal=password . sasl.local.principal - the name of the principal that is used to identify the local node. It must exist in the sasl.credentials.properties file. sasl.roles.properties - (optional) the path to a property file which contains principal/roles mappings represented as principal=role1,role2,role3 . sasl.role - (optional) if present, authorizes joining nodes only if their principal has been assigned this role. sasl.realm - (optional) the name of the realm to use for the SASL mechanisms that require it. 10.2.2. Configure Node Authentication for Library Mode (DIGEST-MD5) The behavior of a node differs depending on whether it is the coordinator node or any other node. The coordinator acts as the SASL server, with the joining or merging nodes behaving as SASL clients.
When using the DIGEST-MD5 mechanism in Library mode, the server and client callbacks must be specified so that the server and client are aware of how to obtain the credentials. Therefore, two CallbackHandlers are required: The server_callback_handler_class is used by the coordinator. The client_callback_handler_class is used by other nodes. The following example demonstrates these CallbackHandlers . Example 10.6. Callback Handlers JGroups is designed so that all nodes are able to act as coordinator or client depending on cluster behavior, so if the current coordinator node goes down, the next node in the succession chain will become the coordinator. Given this behavior, both server and client callback handlers must be identified within SASL for Red Hat JBoss Data Grid implementations. 10.2.3. Configure Node Authentication for Library Mode (GSSAPI) When performing node authentication in Library mode using the GSSAPI mechanism, the login_module_name parameter must be specified instead of callback . This login module is used to obtain a valid Kerberos ticket, which is used to authenticate a client to the server. The server_name must also be specified, as the client principal is constructed as jgroups/USDserver_name@REALM . Example 10.7. Specifying the login module and server on the coordinator node On the coordinator node, the server_callback_handler_class must be specified for node authorization. This will determine if the authenticated joining node has permission to join the cluster. Note The server principal is always constructed as jgroups/server_name , therefore the server principal in Kerberos must also be jgroups/server_name . For example, if the server name in Kerberos is jgroups/node1/mycache , then the server name must be node1/mycache . 10.2.4. Node Authorization in Library Mode The SASL protocol in JGroups is concerned only with the authentication process. To implement node authorization, you can do so within the server callback handler by throwing an Exception. The following example demonstrates this. Example 10.8. Implementing Node Authorization
"<SASL mech=\"DIGEST-MD5\" client_name=\"node_user\" client_password=\"node_password\" server_callback_handler_class=\"org.example.infinispan.security.JGroupsSaslServerCallbackHandler\" client_callback_handler_class=\"org.example.infinispan.security.JGroupsSaslClientCallbackHandler\" sasl_props=\"com.sun.security.sasl.digest.realm=test_realm\" />",
"public class SaslPropAuthUserCallbackHandler implements CallbackHandler { private static final String APPROVED_USER = \"test_user\"; private final String name; private final char[] password; private final String realm; public SaslPropAuthUserCallbackHandler() { this.name = System.getProperty(\"sasl.username\"); this.password = System.getProperty(\"sasl.password\").toCharArray(); this.realm = System.getProperty(\"sasl.realm\"); } @Override public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException { for (Callback callback : callbacks) { if (callback instanceof PasswordCallback) { ((PasswordCallback) callback).setPassword(password); } else if (callback instanceof NameCallback) { ((NameCallback) callback).setName(name); } else if (callback instanceof AuthorizeCallback) { AuthorizeCallback authorizeCallback = (AuthorizeCallback) callback; if (APPROVED_USER.equals(authorizeCallback.getAuthorizationID())) { authorizeCallback.setAuthorized(true); } else { authorizeCallback.setAuthorized(false); } } else if (callback instanceof RealmCallback) { RealmCallback realmCallback = (RealmCallback) callback; realmCallback.setText(realm); } else { throw new UnsupportedCallbackException(callback); } } } }",
"<SASL mech=\"DIGEST-MD5\" client_name=\"node_user\" client_password=\"node_password\" server_callback_handler_class=\"org.jgroups.auth.sasl.SimpleAuthorizingCallbackHandler\" client_callback_handler_class=\"org.jgroups.auth.sasl.SimpleAuthorizingCallbackHandler\" sasl_props=\"com.sun.security.sasl.digest.realm=test_realm\" />",
"<SASL mech=\"DIGEST-MD5\" client_name=\"node_name\" client_password=\"node_password\" client_callback_handler_class=\"USD{CLIENT_CALLBACK_HANDLER_IN_CLASSPATH}\" server_callback_handler_class=\"USD{SERVER_CALLBACK_HANDLER_IN_CLASSPATH}\" sasl_props=\"com.sun.security.sasl.digest.realm=test_realm\" />",
"<SASL mech=\"GSSAPI\" server_name=\"node0/clustered\" login_module_name=\"krb-node0\" server_callback_handler_class=\"org.infinispan.test.integration.security.utils.SaslPropCallbackHandler\" />",
"public class AuthorizingServerCallbackHandler implements CallbackHandler { @Override public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException { for (Callback callback : callbacks) { <!-- Additional configuration information here --> if (callback instanceof AuthorizeCallback) { AuthorizeCallback acb = (AuthorizeCallback) callback; if (!\"myclusterrole\".equals(acb.getAuthenticationID()))) { throw new SecurityException(\"Unauthorized node \" +user); } <!-- Additional configuration information here --> } } }"
]
| https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/developer_guide/sect-configure_node_security_in_library_mode |
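A minimal sketch of wiring up the SimpleAuthorizingCallbackHandler properties listed in this section; the file paths, principals, passwords, and role name are invented for illustration.

# credentials.properties (principal=password)
node1=secret1
node2=secret2

# roles.properties (principal=role1,role2,...)
node1=clusterrole
node2=clusterrole

# system properties passed to the JVM of each node
-Dsasl.credentials.properties=/path/to/credentials.properties
-Dsasl.roles.properties=/path/to/roles.properties
-Dsasl.local.principal=node1
-Dsasl.role=clusterrole
-Dsasl.realm=test_realm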
Chapter 4. Managing build output | Chapter 4. Managing build output Use the following sections for an overview of and instructions for managing build output. 4.1. Build output Builds that use the source-to-image (S2I) strategy result in the creation of a new container image. The image is then pushed to the container image registry specified in the output section of the Build specification. If the output kind is ImageStreamTag , then the image will be pushed to the integrated OpenShift image registry and tagged in the specified imagestream. If the output is of type DockerImage , then the name of the output reference will be used as a docker push specification. The specification may contain a registry or will default to DockerHub if no registry is specified. If the output section of the build specification is empty, then the image will not be pushed at the end of the build. Output to an ImageStreamTag spec: output: to: kind: "ImageStreamTag" name: "sample-image:latest" Output to a docker Push Specification spec: output: to: kind: "DockerImage" name: "my-registry.mycompany.com:5000/myimages/myimage:tag" 4.2. Output image environment variables source-to-image (S2I) strategy builds set the following environment variables on output images: Variable Description OPENSHIFT_BUILD_NAME Name of the build OPENSHIFT_BUILD_NAMESPACE Namespace of the build OPENSHIFT_BUILD_SOURCE The source URL of the build OPENSHIFT_BUILD_REFERENCE The Git reference used in the build OPENSHIFT_BUILD_COMMIT Source commit used in the build Additionally, any user-defined environment variable, for example those configured with S2I strategy options, will also be part of the output image environment variable list. 4.3. Output image labels source-to-image (S2I) builds set the following labels on output images: Label Description io.openshift.build.commit.author Author of the source commit used in the build io.openshift.build.commit.date Date of the source commit used in the build io.openshift.build.commit.id Hash of the source commit used in the build io.openshift.build.commit.message Message of the source commit used in the build io.openshift.build.commit.ref Branch or reference specified in the source io.openshift.build.source-location Source URL for the build You can also use the BuildConfig.spec.output.imageLabels field to specify a list of custom labels that will be applied to each image built from the build configuration. Custom labels for built images spec: output: to: kind: "ImageStreamTag" name: "my-image:latest" imageLabels: - name: "vendor" value: "MyCompany" - name: "authoritative-source-url" value: "registry.mycompany.com" | [
"spec: output: to: kind: \"ImageStreamTag\" name: \"sample-image:latest\"",
"spec: output: to: kind: \"DockerImage\" name: \"my-registry.mycompany.com:5000/myimages/myimage:tag\"",
"spec: output: to: kind: \"ImageStreamTag\" name: \"my-image:latest\" imageLabels: - name: \"vendor\" value: \"MyCompany\" - name: \"authoritative-source-url\" value: \"registry.mycompany.com\""
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/builds_using_buildconfig/managing-build-output |
Chapter 5. Remediating nodes with Machine Health Checks | Chapter 5. Remediating nodes with Machine Health Checks Machine health checks automatically repair unhealthy machines in a particular machine pool. 5.1. About machine health checks Note You can only apply a machine health check to control plane machines on clusters that use control plane machine sets. To monitor machine health, create a resource to define the configuration for a controller. Set a condition to check, such as staying in the NotReady status for five minutes or displaying a permanent condition in the node-problem-detector, and a label for the set of machines to monitor. The controller that observes a MachineHealthCheck resource checks for the defined condition. If a machine fails the health check, the machine is automatically deleted and one is created to take its place. When a machine is deleted, you see a machine deleted event. To limit disruptive impact of the machine deletion, the controller drains and deletes only one node at a time. If there are more unhealthy machines than the maxUnhealthy threshold allows for in the targeted pool of machines, remediation stops and therefore enables manual intervention. Note Consider the timeouts carefully, accounting for workloads and requirements. Long timeouts can result in long periods of downtime for the workload on the unhealthy machine. Too short timeouts can result in a remediation loop. For example, the timeout for checking the NotReady status must be long enough to allow the machine to complete the startup process. To stop the check, remove the resource. 5.1.1. Limitations when deploying machine health checks There are limitations to consider before deploying a machine health check: Only machines owned by a machine set are remediated by a machine health check. If the node for a machine is removed from the cluster, a machine health check considers the machine to be unhealthy and remediates it immediately. If the corresponding node for a machine does not join the cluster after the nodeStartupTimeout , the machine is remediated. A machine is remediated immediately if the Machine resource phase is Failed . 5.2. Configuring machine health checks to use the Self Node Remediation Operator Use the following procedure to configure the worker or control-plane machine health checks to use the Self Node Remediation Operator as a remediation provider. Note To use the Self Node Remediation Operator as a remediation provider for machine health checks, a machine must have an associated node in the cluster. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a SelfNodeRemediationTemplate CR: Define the SelfNodeRemediationTemplate CR: apiVersion: self-node-remediation.medik8s.io/v1alpha1 kind: SelfNodeRemediationTemplate metadata: namespace: openshift-machine-api name: selfnoderemediationtemplate-sample spec: template: spec: remediationStrategy: Automatic 1 1 Specifies the remediation strategy. The default remediation strategy is Automatic . 
To create the SelfNodeRemediationTemplate CR, run the following command: USD oc create -f <snrt-name>.yaml Create or update the MachineHealthCheck CR to point to the SelfNodeRemediationTemplate CR: Define or update the MachineHealthCheck CR: apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: machine-health-check namespace: openshift-machine-api spec: selector: matchLabels: 1 machine.openshift.io/cluster-api-machine-role: "worker" machine.openshift.io/cluster-api-machine-type: "worker" unhealthyConditions: - type: "Ready" timeout: "300s" status: "False" - type: "Ready" timeout: "300s" status: "Unknown" maxUnhealthy: "40%" nodeStartupTimeout: "10m" remediationTemplate: 2 kind: SelfNodeRemediationTemplate apiVersion: self-node-remediation.medik8s.io/v1alpha1 name: selfnoderemediationtemplate-sample 1 Selects whether the machine health check is for worker or control-plane nodes. The label can also be user-defined. 2 Specifies the details for the remediation template. To create a MachineHealthCheck CR, run the following command: USD oc create -f <mhc-name>.yaml To update a MachineHealthCheck CR, run the following command: USD oc apply -f <mhc-name>.yaml | [
"apiVersion: self-node-remediation.medik8s.io/v1alpha1 kind: SelfNodeRemediationTemplate metadata: namespace: openshift-machine-api name: selfnoderemediationtemplate-sample spec: template: spec: remediationStrategy: Automatic 1",
"oc create -f <snrt-name>.yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: machine-health-check namespace: openshift-machine-api spec: selector: matchLabels: 1 machine.openshift.io/cluster-api-machine-role: \"worker\" machine.openshift.io/cluster-api-machine-type: \"worker\" unhealthyConditions: - type: \"Ready\" timeout: \"300s\" status: \"False\" - type: \"Ready\" timeout: \"300s\" status: \"Unknown\" maxUnhealthy: \"40%\" nodeStartupTimeout: \"10m\" remediationTemplate: 2 kind: SelfNodeRemediationTemplate apiVersion: self-node-remediation.medik8s.io/v1alpha1 name: selfnoderemediationtemplate-sample",
"oc create -f <mhc-name>.yaml",
"oc apply -f <mhc-name>.yaml"
]
| https://docs.redhat.com/en/documentation/workload_availability_for_red_hat_openshift/25.1/html/remediation_fencing_and_maintenance/machine-health-checks |
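After the resources above are created, a short verification pass such as the following can confirm they are in place; the resource names match the examples in this section, and these commands are a suggested check rather than part of the documented procedure.

oc get machinehealthcheck machine-health-check -n openshift-machine-api
oc get selfnoderemediationtemplate selfnoderemediationtemplate-sample -n openshift-machine-api
# watch unhealthy machines being deleted and replaced during remediation
oc get machines -n openshift-machine-api -w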
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_release_notes/making-open-source-more-inclusive |
2.2. Creating and Maintaining Databases | 2.2. Creating and Maintaining Databases After creating suffixes to organize the directory data, create databases to contain the data of that directory. Note If you used the dsconf utility or the web console to create the suffix, Directory Server created the database automatically. 2.2.1. Creating Databases The directory tree can be distributed over multiple Directory Server databases. There are two ways to distribute data across multiple databases: One database per suffix. The data for each suffix is contained in a separate database. Three databases are added to store the data contained in separate suffixes: This division of the tree units corresponds to three databases, for example: In this example, DB1 contains the data for ou=people and the data for dc=example,dc=com , so that clients can conduct searches based at dc=example,dc=com . However, DB2 only contains the data for ou=groups , and DB3 only contains the data for ou=contractors : Multiple databases for one suffix. Suppose the number of entries in the ou=people branch of the directory tree is so large that two databases are needed to store them. In this case, the data contained by ou=people could be distributed across two databases: DB1 contains people with names from A-K , and DB2 contains people with names from L-Z . DB3 contains the ou=groups data, and DB4 contains the ou=contractors data. A custom plug-in distributes data from a single suffix across multiple databases. Contact Red Hat Consulting for information on how to create distribution logic for Directory Server. 2.2.1.1. Creating a New Database for a Single Suffix Using the Command Line Use the ldapmodify command-line utility to add a new database to the directory configuration file. The database configuration information is stored in the cn=ldbm database,cn=plugins,cn=config entry. To add a new database: Run ldapmodify and create the entry for the new database. The added entry corresponds to a database named UserData that contains the data for the root or sub-suffix ou=people,dc=example,dc=com . Create a root or a sub-suffix, as described in Section 2.1.1.1.1, "Creating a Root Suffix Using the Command Line" and Section 2.1.1.2.1, "Creating a Sub-suffix Using the Command Line" . The database name, given in the DN attribute, must correspond with the value in the nsslapd-backend attribute of the suffix entry. 2.2.1.2. Adding Multiple Databases for a Single Suffix A single suffix can be distributed across multiple databases. However, to distribute the suffix, a custom distribution function has to be created to extend the directory. For more information on creating a custom distribution function, contact Red Hat Consulting. Note Once entries have been distributed, they cannot be redistributed. The following restrictions apply: The distribution function cannot be changed once entry distribution has been deployed. The LDAP modrdn operation cannot be used to rename entries if that would cause them to be distributed into a different database. Distributed local databases cannot be replicated. The ldapmodify operation cannot be used to change entries if that would cause them to be distributed into a different database. Violating these restrictions prevents Directory Server from correctly locating and returning entries. After creating a custom distribution logic plug-in, add it to the directory. The distribution logic is a function declared in a suffix.
This function is called for every operation reaching this suffix, including subtree search operations that start above the suffix. A distribution function can be inserted into a suffix using both the web console and the command line interface. To add a custom distribution function to a suffix: Run ldapmodify . Add the following attributes to the suffix entry itself, supplying the information about the custom distribution logic: The nsslapd-backend attribute specifies all databases associated with this suffix. The nsslapd-distribution-plugin attribute specifies the name of the library that the plug-in uses. The nsslapd-distribution-funct attribute provides the name of the distribution function itself. 2.2.2. Maintaining Directory Databases 2.2.2.1. Setting a Database in Read-Only Mode When a database is in read-only mode, you cannot create, modify, or delete any entries. One of the situations when read-only mode is useful is for manually initializing a consumer or before backing up or exporting data from Directory Server. Read-only mode ensures a faithful image of the state of these databases at a given time. The command-line utilities and the web console do not automatically put the directory in read-only mode before export or backup operations because this would make your directory unavailable for updates. However, with multi-supplier replication, this might not be a problem. 2.2.2.1.1. Setting a Database in Read-only Mode Using the Command Line To set a database in read-only mode, use the dsconf backend suffix set command. For example, to set the database of the o=test suffix in read-only mode: Display the suffixes and their corresponding back end: This command displays the name of the back end database next to each suffix. You require the suffix's database name in the next step. Set the database in read-only mode: 2.2.2.1.2. Setting a Database in Read-only Mode Using the Web Console To set a database in read-only mode: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Database menu. Select the suffix entry. Select Database Read-Only Mode . Click Save Configuration . 2.2.2.2. Placing the Entire Directory Server in Read-Only Mode If Directory Server maintains more than one database and all databases need to be placed in read-only mode, this can be done in a single operation. Warning This operation also makes the Directory Server configuration read-only; therefore, you cannot update the server configuration, enable or disable plug-ins, or even restart Directory Server while it is in read-only mode. Once read-only mode is enabled, it cannot be undone unless you manually modify the configuration files. Note If Directory Server contains replicas, do not use read-only mode because it will disable replication. 2.2.2.2.1. Placing the Entire Directory Server in Read-Only Mode Using the Command Line To enable the read-only mode for Directory Server: Set the nsslapd-readonly parameter to on : Restart the instance: 2.2.2.2.2. Placing the Entire Directory Server in Read-Only Mode Using the Web Console To enable the read-only mode for Directory Server: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Server Settings menu, and select the Server Settings entry. On the Advanced Settings tab, select Server Read-Only . Click Save . 2.2.2.3.
Deleting a Database If a suffix is no longer required, you can delete the database that stores the suffix. 2.2.2.3.1. Deleting a Database Using the Command Line To delete a database, use the dsconf backend delete command. For example, to delete the database of the o=test suffix: Display the suffixes and their corresponding back end: You require the name of the back end database, which is displayed next to the suffix, in the next step. Delete the database: 2.2.2.3.2. Deleting a Database Using the Web Console To delete a database using the web console: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Database menu. Select the suffix to delete, click Suffix Tasks , and select Delete Suffix . Click Yes to confirm. 2.2.2.4. Changing the Transaction Log Directory The transaction log enables Directory Server to recover the database after an instance shuts down unexpectedly. In certain situations, administrators want to change the path to the transaction logs. For example, to store them on a different physical disk than the Directory Server database. Note To achieve higher performance, mount a faster disk to the directory that contains the transaction logs, instead of changing the location. For details, see the corresponding section in the Red Hat Directory Server Performance Tuning Guide . To change the location of the transaction log directory: Stop the Directory Server instance: Create a new location for the transaction logs. For example: Set permissions to enable only Directory Server to access the directory: Remove all __db.* files from the transaction log directory. For example: Move all log.* files from the previous transaction log directory to the new one. For example: If SELinux is running in enforcing mode, set the dirsrv_var_lib_t context on the directory: Edit the /etc/dirsrv/slapd-instance_name/dse.ldif file, and update the nsslapd-db-logdirectory parameter under the cn=config,cn=ldbm database,cn=plugins,cn=config entry. For example: Start the instance: | [
"ldapmodify -a -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x dn: cn=UserData,cn=ldbm database,cn=plugins,cn=config changetype: add objectclass: extensibleObject objectclass: nsBackendInstance nsslapd-suffix: ou=people,dc=example,dc=com",
"ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x",
"dn: suffix changetype: modify add: nsslapd-backend nsslapd-backend: Database1 - add: nsslapd-backend nsslapd-backend: Database2 - add: nsslapd-backend nsslapd-backend: Database3 - add: nsslapd-distribution-plugin nsslapd-distribution-plugin: /full/name/of/a/shared/library - add: nsslapd-distribution-funct nsslapd-distribution-funct: distribution-function-name",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend suffix list dc=example,dc=com (userroot) o=test ( test_database )",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend suffix set --enable-readonly \" test_database \"",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-readonly=on",
"dsctl instance_name restart",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend suffix list dc=example,dc=com (userroot) o=test ( test_database )",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend delete \" test_database \"",
"dsctl instance_name stop",
"mkdir -p /srv/dirsrv/ instance_name /db/",
"chown dirsrv:dirsrv /srv/dirsrv/ instance_name /db/ chmod 770 /srv/dirsrv/ instance_name /db/",
"rm /var/lib/dirsrv/slapd- instance_name /db/__db.*",
"mv /var/lib/dirsrv/slapd- instance_name /db/log.* /srv/dirsrv/ instance_name /db/",
"semanage fcontext -a -t dirsrv_var_lib_t /srv/dirsrv/ instance_name /db/ restorecon -Rv /srv/dirsrv/ instance_name /db/",
"dn: cn=config,cn=ldbm database,cn=plugins,cn=config nsslapd-db-logdirectory: /srv/dirsrv/ instance_name /db/",
"dsctl instance_name start"
]
| https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/configuring_directory_databases-creating_and_maintaining_databases |
Integrating OpenShift Container Platform data into cost management | Integrating OpenShift Container Platform data into cost management Cost Management Service 1-latest Learn how to add and configure your OpenShift Container Platform integrations Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/integrating_openshift_container_platform_data_into_cost_management/index |