Chapter 15. Replacing storage nodes
Chapter 15. Replacing storage nodes You can choose one of the following procedures to replace storage nodes: Section 15.1, "Replacing operational nodes on Red Hat OpenStack Platform installer-provisioned infrastructure" Section 15.2, "Replacing failed nodes on Red Hat OpenStack Platform installer-provisioned infrastructure"

15.1. Replacing operational nodes on Red Hat OpenStack Platform installer-provisioned infrastructure Procedure Log in to the OpenShift Web Console, and click Compute → Nodes. Identify the node that you need to replace and make a note of its Machine Name. Mark the node as unschedulable: <node_name> Specify the name of the node that you need to replace. Drain the node: Important This activity might take 5-10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when you label the new node and it becomes functional. Click Compute → Machines and search for the required machine. Beside the required machine, click the Action menu (...) → Delete Machine. Click Delete to confirm that the machine is deleted. A new machine is created automatically. Wait for the new machine to start and transition into the Running state. Important This activity might take 5-10 minutes or more. Click Compute → Nodes and confirm that the new node is in the Ready state. Apply the OpenShift Data Foundation label to the new node: From the user interface For the new node, click the Action Menu (...) → Edit Labels, add cluster.ocs.openshift.io/openshift-storage, and click Save. From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node.

Verification steps Verify that the new node is present in the output: Click Workloads → Pods and confirm that at least the following pods on the new node are in the Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in the Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the selected host or hosts: Display the list of available block devices: Check for the crypt keyword beside the ocs-deviceset names. If the verification steps fail, contact Red Hat Support.

15.2. Replacing failed nodes on Red Hat OpenStack Platform installer-provisioned infrastructure Procedure Log in to the OpenShift Web Console, and click Compute → Nodes. Identify the faulty node, and click its Machine Name. Click Actions → Edit Annotations, and click Add More. Add machine.openshift.io/exclude-node-draining, and click Save. Click Actions → Delete Machine, and click Delete. A new machine is created automatically; wait for the new machine to start. Important This activity might take 5-10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when you label the new node and it becomes functional. Click Compute → Nodes and confirm that the new node is in the Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click the Action Menu (...) → Edit Labels, add cluster.ocs.openshift.io/openshift-storage, and click Save. From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Optional: If the failed Red Hat OpenStack Platform instance is not removed automatically, terminate the instance from the Red Hat OpenStack Platform console.

Verification steps Verify that the new node is present in the output: Click Workloads → Pods and confirm that at least the following pods on the new node are in the Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in the Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the selected host or hosts: Display the list of available block devices: Check for the crypt keyword beside the ocs-deviceset names. If the verification steps fail, contact Red Hat Support.
[ "oc adm cordon <node_name>", "oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd", "oc debug node/ <node_name>", "chroot /host", "lsblk", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd", "oc debug node/ <node_name>", "chroot /host", "lsblk" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/replacing_storage_nodes
Chapter 2. Supported versions of Windows Server
Chapter 2. Supported versions of Windows Server You can establish a trust relationship with Active Directory (AD) forests that use the following forest and domain functional levels: Forest functional level range: Windows Server 2012 - Windows Server 2016 Domain functional level range: Windows Server 2012 - Windows Server 2016 Identity Management (IdM) supports establishing a trust with Active Directory domain controllers running the following operating systems: Windows Server 2022 (RHEL 9.1 and later) Windows Server 2019 Windows Server 2016 Windows Server 2012 R2 Windows Server 2012 Important Identity Management (IdM) does not support establishing trust to Active Directory with Active Directory domain controllers running Windows Server 2008 R2 or earlier versions. RHEL IdM requires SMB encryption when establishing the trust relationship, which is only supported in Windows Server 2012 or later.
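For context, the trust itself is typically established from the IdM side with the ipa trust-add command once the forest meets the functional-level requirements above. The following is a minimal sketch; ad.example.com and the Administrator account are placeholder values for your AD forest root domain and an AD administrative user:

$ ipa trust-add --type=ad ad.example.com --admin Administrator --password

The command prompts for the AD administrator's password and establishes the trust if the forest and domain functional levels are supported.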
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/installing_trust_between_idm_and_ad/supported-versions-of-windows-server_installing-trust-between-idm-and-ad
Chapter 3. Red Hat support for cloud-init
Chapter 3. Red Hat support for cloud-init Red Hat supports the cloud-init utility, cloud-init modules, and default directories and files across various Red Hat products. 3.1. cloud-init significant directories and files By using directories and files in the following table, you can perform tasks such as: Configuring cloud-init Finding information about your configuration after cloud-init has run Examining log files Finding templates Depending on your scenario and datasource, there can be additional files and directories important to your configuration. Table 3.1. cloud-init directories and files Directory or File Description /etc/cloud/cloud.cfg The cloud.cfg file includes the basic cloud-init configuration and lets you know in what phase each module runs. /etc/cloud/cloud.cfg.d The cloud.cfg.d directory is where you can add additional directives for cloud-init . /var/lib/cloud When cloud-init runs, it creates a directory layout under /var/lib/cloud . The layout includes directories and files that give specifics on your instance configuration. /usr/share/doc/cloud-init/examples The examples directory includes multiple examples. You can use them to help model your own directives. /etc/cloud/templates This directory includes templates that you can enable in cloud-init for certain scenarios. The templates provide direction for enabling. /var/log/cloud-init.log The cloud-init.log file provides log information helpful for debugging. /run/cloud-init The /run/cloud-init directory includes logged information about your datasource and the ds-identify script. 3.2. Red Hat products that use cloud-init You can use cloud-init with these Red Hat products: Red Hat Virtualization. Once you install cloud-init on a VM, you can create a template and leverage cloud-init functions for all VMs created from that template. Refer to Using Cloud-Init to Automate the Configuration of Virtual Machines for information about using cloud-init with VMs. Red Hat OpenStack Platform. You can use cloud-init to help configure images for OpenStack. Refer to the Instances and Images Guide for more information. Red Hat Satellite. You can use cloud-init with Red Hat Satellite. Refer to Preparing Cloud-init Images in Red Hat Virtualization for more information. Red Hat OpenShift. You can use cloud-init when you create VMs for OpenShift. Refer to Creating Virtual Machines for more information. 3.3. Red Hat supports these cloud-init modules Red Hat supports most cloud-init modules. Individual modules can contain multiple configuration options. In the following table, you can find all of the cloud-init modules that Red Hat currently supports and provides a brief description and the default module frequency. Refer to Modules in the cloud-init Documentation section for complete descriptions and options for these modules. Table 3.2. 
Supported cloud-init modules cloud-init Module Description Default Module Frequency bootcmd Runs commands early in the boot process per always ca_certs Adds CA certificates per instance debug Enables or disables output of internal information to assist with debugging per instance disable_ec2_metadata Enables or disables the AWS EC2 metadata per always disk_setup Configures simple partition tables and file systems per instance final_message Specifies the output message once cloud-init completes per always foo Example shows module structure (Module does nothing) per instance growpart Resizes partitions to fill the available disk space per always keys_to_console Allows controls of fingerprints and keys that can be written to the console per instance landscape Installs and configures a landscape client per instance locale Configures the system locale and applies it system-wide per instance mcollective Installs, configures, and starts mcollective per instance migrator Moves old versions of cloud-init to newer versions per always mounts Configures mount points and swap files per instance phone_home Posts data to a remote host after boot completes per instance power_state_change Completes shutdown and reboot after all configuration modules have run per instance puppet Installs and configures puppet per instance resizefs Resizes a file system to use all available space on a partition per always resolv_conf Configures resolv.conf per instance rh_subscription Registers a Red Hat Enterprise Linux system per instance rightscale_userdata Adds support for RightScale configuration hooks to cloud-init per instance rsyslog Configures remote system logging using rsyslog per instance runcmd Runs arbitrary commands per instance salt_minion Installs, configures, and starts salt minion per instance scripts_per_boot Runs per boot scripts per always scripts_per_instance Runs per instance scripts per instance scripts_per_once Runs scripts once per once scripts_user Runs user scripts per instance scripts_vendor Runs vendor scripts per instance seed_random Provides random seed data per instance set_hostname Sets host name and fully qualified domain name (FQDN) per always set_passwords Sets user passwords and enables or disables SSH password authentication per instance ssh_authkey_fingerprints Logs fingerprints of user SSH keys per instance ssh_import_id Imports SSH keys per instance ssh Configures SSH, and host and authorized SSH keys per instance timezone Sets the system time zone per instance update_etc_hosts Updates /etc/hosts per always update_hostname Updates host name and FQDN per always users_groups Configures users and groups per instance write_files Writes arbitrary files per instance yum_add_repo Adds dnf repository configuration to the system per always The following list of modules is not supported by Red Hat: Table 3.3. Modules not supported Module apt_configure apt_pipeline byobu chef emit_upstart grub_dpkg ubuntu_init_switch 3.4. The default cloud.cfg file The /etc/cloud/cloud.cfg file lists the modules comprising the basic configuration for cloud-init . The modules in the file are the default modules for cloud-init . You can configure the modules for your environment or remove modules you do not need. Modules that are included in cloud.cfg do not necessarily do anything by being listed in the file. You need to configure them individually if you want them to perform actions during one of the cloud-init phases. The cloud.cfg file provides the chronology for running individual modules. 
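For illustration, the following user-data snippet (a minimal sketch, which could equally be placed in a *.cfg file under /etc/cloud/cloud.cfg.d/) configures three of the supported modules listed above; the file path, its contents, and the restarted service are hypothetical examples:

#cloud-config
timezone: America/New_York
write_files:
  - path: /etc/example-app.conf
    permissions: '0644'
    content: |
      greeting=hello
runcmd:
  - [ systemctl, restart, chronyd ]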
You can add additional modules to cloud.cfg as long as Red Hat supports the modules you want to add. The default contents of the file for Red Hat Enterprise Linux (RHEL) are as follows: Note Modules run in the order given in cloud.cfg . You typically do not change this order. The cloud.cfg directives can be overridden by user data. When running cloud-init manually, you can override cloud.cfg with command line options. Each module includes its own configuration options, where you can add specific information. To ensure optimal functionality of the configuration, prefer using module names with underscores ( _ ) rather than dashes ( - ). 1 Specifies the default user for the system. Refer to Users and Groups for more information. 2 Enables or disables root login. Refer to Authorized Keys for more information. 3 Specifies whether ssh is configured to accept password authentication. Refer to Set Passwords for more information. 4 Configures mount points; must be a list containing six values. Refer to Mounts for more information. 5 Specifies whether to remove default host SSH keys. Refer to Host Keys for more information. 6 Specifies key types to generate. Refer to Host Keys for more information. Note that for RHEL 8.4 and earlier, the default value of this line is ~ . 7 cloud-init runs at multiple stages of boot. Set this option so that cloud-init can log all stages to its log file. Find more information about this option in the cloud-config.txt file in the usr/share/doc/cloud-init/examples directory. 8 Enables or disables VMware vSphere customization 9 The modules in this section are services that run when the cloud-init service starts, early in the boot process. 10 These modules run during cloud-init configuration, after initial boot. 11 These modules run in the final phase of cloud-init , after the configuration finishes. 12 Specifies details about the default user. Refer to Users and Groups for more information. 13 Specifies the distribution 14 Specifies the main directory that contains cloud-init -specific subdirectories. Refer to Directory layout for more information. 15 Specifies where templates reside 16 The name of the SSH service Additional resources Modules 3.5. The cloud.cfg.d directory cloud-init acts upon directives that you provide and configure. Typically, those directives are included in the cloud.cfg.d directory. Note While you can configure modules by adding user data directives within the cloud.cfg file, as a best practice consider leaving cloud.cfg unmodified. Add your directives to the /etc/cloud/cloud.cfg.d directory. Adding directives to this directory can make future modifications and upgrades easier. There are multiple ways to add directives. You can include directives in a file named *.cfg , which includes the heading #cloud-config . Typically, the directory would contain multiple *cfg files. There are other options for adding directives, for example, you can add a user data script. Refer to User-Data Formats for more information. Additional resources Cloud config examples 3.6. The default 05_logging.cfg file The 05_logging.cfg file sets logging information for cloud-init . The /etc/cloud/cloud.cfg.d directory includes this file, along with other cloud-init directives that you add. cloud-init uses the logging configuration in 05_logging.cfg by default. The default contents of the file for Red Hat Enterprise Linux (RHEL) are as follows: Additional resources Logging 3.7. 
The cloud-init /var/lib/cloud directory layout When cloud-init first runs, it creates a directory layout that includes information about your instance and cloud-init configuration. The directory can include optional directories, such as /scripts/vendor . The following is a sample directory layout for cloud-init : Additional resources Directory layout
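On a booted instance, you can confirm what cloud-init did by inspecting the files and directories described above; for example (a sketch, and the exact output depends on your datasource):

$ cloud-init status --long
$ sudo ls /var/lib/cloud/instance/
$ sudo tail /var/log/cloud-init.log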
[ "users: 1 - default disable_root: true 2 resize_rootfs_tmp: /dev ssh_pwauth: false 3 mount_default_fields: [~, ~, 'auto', 'defaults,nofail,x-systemd.requires=cloud-init.service', '0', '2'] 4 ssh_deletekeys: true 5 ssh_genkeytypes: ['rsa', 'ecdsa', 'ed25519'] 6 syslog_fix_perms: ~ 7 disable_vmware_customization: false 8 cloud_init_modules: 9 - migrator - seed_random - bootcmd - write_files - growpart - resizefs - disk_setup - mounts - set_hostname - update_hostname - update_etc_hosts - ca_certs - rsyslog - users_groups - ssh cloud_config_modules: 10 - ssh_import_id - locale - set_passwords - rh_subscription - spacewalk - yum_add_repo - ntp - timezone - disable_ec2_metadata - runcmd cloud_final_modules: 11 - package_update_upgrade_install - write_files_deferred - puppet - chef - ansible - mcollective - salt_minion - reset_rmc - rightscale_userdata - scripts_vendor - scripts_per_once - scripts_per_boot - scripts_per_instance - scripts_user - ssh_authkey_fingerprints - keys_to_console - install_hotplug - phone_home - final_message - power_state_change system_info: default_user: 12 name: cloud-user lock_passwd: true gecos: Cloud User groups: [adm, systemd-journal] sudo: [\"ALL=(ALL) NOPASSWD:ALL\"] shell: /bin/bash distro: rhel 13 network: renderers: ['sysconfig', 'eni', 'netplan', 'network-manager', 'networkd'] paths: cloud_dir: /var/lib/cloud 14 templates_dir: /etc/cloud/templates 15 ssh_svcname: sshd 16 vim:syntax=yaml", "## This yaml formatted config file handles setting ## logger information. The values that are necessary to be set ## are seen at the bottom. The top '_log' are only used to remove ## redundancy in a syslog and fallback-to-file case. ## ## The 'log_cfgs' entry defines a list of logger configs ## Each entry in the list is tried, and the first one that ## works is used. If a log_cfg list entry is an array, it will ## be joined with '\\n'. _log: - &log_base | [loggers] keys=root,cloudinit [handlers] keys=consoleHandler,cloudLogHandler [formatters] keys=simpleFormatter,arg0Formatter [logger_root] level=DEBUG handlers=consoleHandler,cloudLogHandler [logger_cloudinit] level=DEBUG qualname=cloudinit handlers= propagate=1 [handler_consoleHandler] class=StreamHandler level=WARNING formatter=arg0Formatter args=(sys.stderr,) [formatter_arg0Formatter] format=%(asctime)s - %(filename)s[%(levelname)s]: %(message)s [formatter_simpleFormatter] format=[CLOUDINIT] %(filename)s[%(levelname)s]: %(message)s - &log_file | [handler_cloudLogHandler] class=FileHandler level=DEBUG formatter=arg0Formatter args=('/var/log/cloud-init.log',) - &log_syslog | [handler_cloudLogHandler] class=handlers.SysLogHandler level=DEBUG formatter=simpleFormatter args=(\"/dev/log\", handlers.SysLogHandler.LOG_USER) log_cfgs: Array entries in this list will be joined into a string that defines the configuration. # If you want logs to go to syslog, uncomment the following line. - [ *log_base, *log_syslog ] # The default behavior is to just log to a file. This mechanism that does not depend on a system service to operate. - [ *log_base, *log_file ] A file path can also be used. - /etc/log.conf This tells cloud-init to redirect its stdout and stderr to 'tee -a /var/log/cloud-init-output.log' so the user can see output there without needing to look on the console. 
output: {all: '| tee -a /var/log/cloud-init-output.log'}", "/var/lib/cloud/ - data/ - instance-id - previous-instance-id - previous-datasource - previous-hostname - result.json - set-hostname - status.json - handlers/ - instance - boot-finished - cloud-config.txt - datasource - handlers/ - obj.pkl - scripts/ - sem/ - user-data.txt - user-data.txt.i - vendor-data.txt - vendor-data.txt.i - instances/ f111ee00-0a4a-4eea-9c17-3fa164739c55/ - boot-finished - cloud-config.txt - datasource - handlers/ - obj.pkl - scripts/ - sem/ - user-data.txt - user-data.txt.i - vendor-data.txt - vendor-data.txt.i - scripts/ - per-boot/ - per-instance/ - per-once/ - vendor/ - seed/ - sem/ - config_scripts_per_once.once" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_cloud-init_for_rhel_9/red-hat-support-for-cloud-init_cloud-content
Chapter 8. Operator SDK
Chapter 8. Operator SDK 8.1. Installing the Operator SDK CLI The Operator SDK provides a command-line interface (CLI) tool that Operator developers can use to build, test, and deploy an Operator. You can install the Operator SDK CLI on your workstation so that you are prepared to start authoring your own Operators. See Developing Operators for full documentation on the Operator SDK. Note OpenShift Container Platform 4.7 supports Operator SDK v1.3.0. 8.1.1. Installing the Operator SDK CLI You can install the Operator SDK CLI tool on Linux. Prerequisites Go v1.13+ docker v17.03+, podman v1.9.3+, or buildah v1.7+ Procedure Navigate to the OpenShift mirror site . From the 4.7.23 directory, download the latest version of the tarball for Linux. Unpack the archive: $ tar xvf operator-sdk-v1.3.0-ocp-linux-x86_64.tar.gz Make the file executable: $ chmod +x operator-sdk Move the extracted operator-sdk binary to a directory that is on your PATH . Tip To check your PATH : $ echo $PATH $ sudo mv ./operator-sdk /usr/local/bin/operator-sdk Verification After you install the Operator SDK CLI, verify that it is available: $ operator-sdk version Example output operator-sdk version: "v1.3.0-ocp", ...

8.2. Operator SDK CLI reference The Operator SDK command-line interface (CLI) is a development kit designed to make writing Operators easier. Operator SDK CLI syntax $ operator-sdk <command> [<subcommand>] [<argument>] [<flags>] Operator authors with cluster administrator access to a Kubernetes-based cluster (such as OpenShift Container Platform) can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, or Helm. Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work. See Developing Operators for full documentation on the Operator SDK. 8.2.1. bundle The operator-sdk bundle command manages Operator bundle metadata. 8.2.1.1. validate The bundle validate subcommand validates an Operator bundle. Table 8.1. bundle validate flags Flag Description -h , --help Help output for the bundle validate subcommand. --index-builder (string) Tool to pull and unpack bundle images. Only used when validating a bundle image. Available options are docker , which is the default, podman , or none . --list-optional List all optional validators available. When set, no validators are run. --select-optional (string) Label selector to select optional validators to run. When run with the --list-optional flag, lists available optional validators. 8.2.2. cleanup The operator-sdk cleanup command destroys and removes resources that were created for an Operator that was deployed with the run command. Table 8.2. cleanup flags Flag Description -h , --help Help output for the cleanup command. --kubeconfig (string) Path to the kubeconfig file to use for CLI requests. -n , --namespace (string) If present, namespace in which to run the CLI request. --timeout <duration> Time to wait for the command to complete before failing. The default value is 2m0s . 8.2.3. completion The operator-sdk completion command generates shell completions to make issuing CLI commands quicker and easier. Table 8.3. completion subcommands Subcommand Description bash Generate bash completions. zsh Generate zsh completions. Table 8.4. completion flags Flag Description -h, --help Usage help output. 
For example: $ operator-sdk completion bash Example output # bash completion for operator-sdk -*- shell-script -*- ... # ex: ts=4 sw=4 et filetype=sh 8.2.4. create The operator-sdk create command is used to create, or scaffold , a Kubernetes API. 8.2.4.1. api The create api subcommand scaffolds a Kubernetes API. The subcommand must be run in a project that was initialized with the init command. Table 8.5. create api flags Flag Description -h , --help Help output for the create api subcommand. 8.2.5. generate The operator-sdk generate command invokes a specific generator to generate code or manifests. 8.2.5.1. bundle The generate bundle subcommand generates a set of bundle manifests, metadata, and a bundle.Dockerfile file for your Operator project. Note Typically, you run the generate kustomize manifests subcommand first to generate the input Kustomize bases that are used by the generate bundle subcommand. However, you can use the make bundle command in an initialized project to automate running these commands in sequence. Table 8.6. generate bundle flags Flag Description --channels (string) Comma-separated list of channels to which the bundle belongs. The default value is alpha . --crds-dir (string) Root directory for CustomResourceDefinition manifests. --default-channel (string) The default channel for the bundle. --deploy-dir (string) Root directory for Operator manifests, such as deployments and RBAC. This directory is different from the directory passed to the --input-dir flag. -h , --help Help for generate bundle . --input-dir (string) Directory from which to read an existing bundle. This directory is the parent of your bundle manifests directory and is different from the --deploy-dir directory. --kustomize-dir (string) Directory containing Kustomize bases and a kustomization.yaml file for bundle manifests. The default path is config/manifests . --manifests Generate bundle manifests. --metadata Generate bundle metadata and Dockerfile. --output-dir (string) Directory to write the bundle to. --overwrite Overwrite the bundle metadata and Dockerfile if they exist. The default value is true . --package (string) Package name for the bundle. -q , --quiet Run in quiet mode. --stdout Write bundle manifest to standard out. --version (string) Semantic version of the Operator in the generated bundle. Set only when creating a new bundle or upgrading the Operator. Additional resources See Bundling an Operator and deploying with Operator Lifecycle Manager for a full procedure that includes using the make bundle command to call the generate bundle subcommand. 8.2.5.2. kustomize The generate kustomize subcommand contains subcommands that generate Kustomize data for the Operator. 8.2.5.2.1. manifests The generate kustomize manifests subcommand generates or regenerates Kustomize bases and a kustomization.yaml file in the config/manifests directory, which are used to build bundle manifests by other Operator SDK commands. This command interactively asks for UI metadata, an important component of manifest bases, by default unless a base already exists or you set the --interactive=false flag. Table 8.7. generate kustomize manifests flags Flag Description --apis-dir (string) Root directory for API type definitions. -h , --help Help for generate kustomize manifests . --input-dir (string) Directory containing existing Kustomize files. --interactive When set to false , if no Kustomize base exists, an interactive command prompt is presented to accept custom metadata. 
--output-dir (string) Directory where to write Kustomize files. --package (string) Package name. -q , --quiet Run in quiet mode. 8.2.6. init The operator-sdk init command initializes an Operator project and generates, or scaffolds , a default project directory layout for the given plug-in. This command writes the following files: Boilerplate license file PROJECT file with the domain and repository Makefile to build the project go.mod file with project dependencies kustomization.yaml file for customizing manifests Patch file for customizing images for manager manifests Patch file for enabling Prometheus metrics main.go file to run Table 8.8. init flags Flag Description --help, -h Help output for the init command. --plugins (string) Name and optionally version of the plug-in to initialize the project with. Available plug-ins are ansible.sdk.operatorframework.io/v1 , go.kubebuilder.io/v2 , go.kubebuilder.io/v3 , and helm.sdk.operatorframework.io/v1 . --project-version Project version. Available values are 2 and 3-alpha , which is the default. 8.2.7. run The operator-sdk run command provides options that can launch the Operator in various environments. 8.2.7.1. bundle The run bundle subcommand deploys an Operator in the bundle format with Operator Lifecycle Manager (OLM). Table 8.9. run bundle flags Flag Description --index-image (string) Index image in which to inject a bundle. The default image is quay.io/operator-framework/upstream-opm-builder:latest . --install-mode <install_mode_value> Install mode supported by the cluster service version (CSV) of the Operator, for example AllNamespaces or SingleNamespace . --timeout <duration> Install timeout. The default value is 2m0s . --kubeconfig (string) Path to the kubeconfig file to use for CLI requests. n , --namespace (string) If present, namespace in which to run the CLI request. -h , --help Help output for the run bundle subcommand. Additional resources See Operator group membership for details on possible install modes. 8.2.7.2. bundle-upgrade The run bundle-upgrade subcommand upgrades an Operator that was previously installed in the bundle format with Operator Lifecycle Manager (OLM). Table 8.10. run bundle-upgrade flags Flag Description --timeout <duration> Upgrade timeout. The default value is 2m0s . --kubeconfig (string) Path to the kubeconfig file to use for CLI requests. n , --namespace (string) If present, namespace in which to run the CLI request. -h , --help Help output for the run bundle subcommand. 8.2.8. scorecard The operator-sdk scorecard command runs the scorecard tool to validate an Operator bundle and provide suggestions for improvements. The command takes one argument, either a bundle image or directory containing manifests and metadata. If the argument holds an image tag, the image must be present remotely. Table 8.11. scorecard flags Flag Description -c , --config (string) Path to scorecard configuration file. The default path is bundle/tests/scorecard/config.yaml . -h , --help Help output for the scorecard command. --kubeconfig (string) Path to kubeconfig file. -L , --list List which tests are available to run. -n , --namespace (string) Namespace in which to run the test images. -o , --output (string) Output format for results. Available values are text , which is the default, and json . -l , --selector (string) Label selector to determine which tests are run. -s , --service-account (string) Service account to use for tests. The default value is default . -x , --skip-cleanup Disable resource cleanup after tests are run. 
-w , --wait-time <duration> Seconds to wait for tests to complete, for example 35s . The default value is 30s . Additional resources See Validating Operators using the scorecard tool for details about running the scorecard tool.
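To tie the reference together, the following sketch shows how several of these commands are commonly combined in a bundle workflow; the bundle directory, image name, and version are placeholders, and the flags you need depend on your project:

$ operator-sdk generate kustomize manifests -q
$ operator-sdk generate bundle --version 0.1.0 --channels alpha --default-channel alpha
$ operator-sdk bundle validate ./bundle
$ operator-sdk run bundle quay.io/example/memcached-operator-bundle:v0.1.0 --install-mode AllNamespaces --timeout 5m0s
$ operator-sdk scorecard ./bundle --selector=suite=basic --wait-time 60s --output json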
[ "tar xvf operator-sdk-v1.3.0-ocp-linux-x86_64.tar.gz", "chmod +x operator-sdk", "echo USDPATH", "sudo mv ./operator-sdk /usr/local/bin/operator-sdk", "operator-sdk version", "operator-sdk version: \"v1.3.0-ocp\",", "operator-sdk <command> [<subcommand>] [<argument>] [<flags>]", "operator-sdk completion bash", "bash completion for operator-sdk -*- shell-script -*- ex: ts=4 sw=4 et filetype=sh" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/cli_tools/operator-sdk
Chapter 16. Guest Virtual Machine Device Configuration
Chapter 16. Guest Virtual Machine Device Configuration Red Hat Enterprise Linux 7 supports three classes of devices for guest virtual machines: Emulated devices are purely virtual devices that mimic real hardware, allowing unmodified guest operating systems to work with them using their standard in-box drivers. Virtio devices (also known as paravirtualized ) are purely virtual devices designed to work optimally in a virtual machine. Virtio devices are similar to emulated devices, but non-Linux virtual machines do not include the drivers they require by default. Virtualization management software like the Virtual Machine Manager ( virt-manager ) and the Red Hat Virtualization Hypervisor install these drivers automatically for supported non-Linux guest operating systems. Red Hat Enterprise Linux 7 supports up to 216 virtio devices. For more information, see Chapter 5, KVM Paravirtualized (virtio) Drivers . Assigned devices are physical devices that are exposed to the virtual machine. This method is also known as passthrough . Device assignment allows virtual machines exclusive access to PCI devices for a range of tasks, and allows PCI devices to appear and behave as if they were physically attached to the guest operating system. Red Hat Enterprise Linux 7 supports up to 32 assigned devices per virtual machine. Device assignment is supported on PCIe devices, including select graphics devices . Parallel PCI devices may be supported as assigned devices, but they have severe limitations due to security and system configuration conflicts. Red Hat Enterprise Linux 7 supports PCI hot plug of devices exposed as single-function slots to the virtual machine. Single-function host devices and individual functions of multi-function host devices may be configured to enable this. Configurations exposing devices as multi-function PCI slots to the virtual machine are recommended only for non-hotplug applications. For more information on specific devices and related limitations, see Section 23.17, "Devices" . Note Platform support for interrupt remapping is required to fully isolate a guest with assigned devices from the host. Without such support, the host may be vulnerable to interrupt injection attacks from a malicious guest. In an environment where guests are trusted, the admin may opt-in to still allow PCI device assignment using the allow_unsafe_interrupts option to the vfio_iommu_type1 module. This may either be done persistently by adding a .conf file (for example local.conf ) to /etc/modprobe.d containing the following: or dynamically using the sysfs entry to do the same: 16.1. PCI Devices PCI device assignment is only available on hardware platforms supporting either Intel VT-d or AMD IOMMU. These Intel VT-d or AMD IOMMU specifications must be enabled in the host BIOS for PCI device assignment to function. Procedure 16.1. Preparing an Intel system for PCI device assignment Enable the Intel VT-d specifications The Intel VT-d specifications provide hardware support for directly assigning a physical device to a virtual machine. These specifications are required to use PCI device assignment with Red Hat Enterprise Linux. The Intel VT-d specifications must be enabled in the BIOS. Some system manufacturers disable these specifications by default. The terms used to see these specifications can differ between manufacturers; consult your system manufacturer's documentation for the appropriate terms. 
Activate Intel VT-d in the kernel Activate Intel VT-d in the kernel by adding the intel_iommu=on and iommu=pt parameters to the end of the GRUB_CMDLINE_LINUX line, within the quotes, in the /etc/sysconfig/grub file. The example below is a modified grub file with Intel VT-d activated. Regenerate config file Regenerate /etc/grub2.cfg by running: Note that if you are using a UEFI-based host, the target file should be /etc/grub2-efi.cfg . Ready to use Reboot the system to enable the changes. Your system is now capable of PCI device assignment.

Procedure 16.2. Preparing an AMD system for PCI device assignment Enable the AMD IOMMU specifications The AMD IOMMU specifications are required to use PCI device assignment in Red Hat Enterprise Linux. These specifications must be enabled in the BIOS. Some system manufacturers disable these specifications by default. Enable IOMMU kernel support Append iommu=pt to the end of the GRUB_CMDLINE_LINUX line, within the quotes, in /etc/sysconfig/grub so that AMD IOMMU specifications are enabled at boot. Regenerate config file Regenerate /etc/grub2.cfg by running: Note that if you are using a UEFI-based host, the target file should be /etc/grub2-efi.cfg . Ready to use Reboot the system to enable the changes. Your system is now capable of PCI device assignment. Note For further information on IOMMU, see Appendix E, Working with IOMMU Groups .

16.1.1. Assigning a PCI Device with virsh These steps cover assigning a PCI device to a virtual machine on a KVM hypervisor. This example uses a PCIe network controller with the PCI identifier code, pci_0000_01_00_0 , and a fully virtualized guest machine named guest1-rhel7-64 . Procedure 16.3. Assigning a PCI device to a guest virtual machine with virsh Identify the device First, identify the PCI device designated for device assignment to the virtual machine. Use the lspci command to list the available PCI devices. You can refine the output of lspci with grep . This example uses the Ethernet controller highlighted in the following output: This Ethernet controller is shown with the short identifier 00:19.0 . We need to find out the full identifier used by virsh in order to assign this PCI device to a virtual machine. To do so, use the virsh nodedev-list command to list all devices of a particular type ( pci ) that are attached to the host machine. Then look at the output for the string that maps to the short identifier of the device you wish to use. This example shows the string that maps to the Ethernet controller with the short identifier 00:19.0 . Note that the : and . characters are replaced with underscores in the full identifier. Record the PCI device number that maps to the device you want to use; this is required in other steps. Review device information Information on the domain, bus, and function is available from the output of the virsh nodedev-dumpxml command: # virsh nodedev-dumpxml pci_0000_00_19_0 <device> <name>pci_0000_00_19_0</name> <parent>computer</parent> <driver> <name>e1000e</name> </driver> <capability type='pci'> <domain>0</domain> <bus>0</bus> <slot>25</slot> <function>0</function> <product id='0x1502'>82579LM Gigabit Network Connection</product> <vendor id='0x8086'>Intel Corporation</vendor> <iommuGroup number='7'> <address domain='0x0000' bus='0x00' slot='0x19' function='0x0'/> </iommuGroup> </capability> </device> Figure 16.1. Dump contents Note An IOMMU group is determined based on the visibility and isolation of devices from the perspective of the IOMMU. 
Each IOMMU group may contain one or more devices. When multiple devices are present, all endpoints within the IOMMU group must be claimed for any device within the group to be assigned to a guest. This can be accomplished either by also assigning the extra endpoints to the guest or by detaching them from the host driver using virsh nodedev-detach . Devices contained within a single group may not be split between multiple guests or split between host and guest. Non-endpoint devices such as PCIe root ports, switch ports, and bridges should not be detached from the host drivers and will not interfere with assignment of endpoints. Devices within an IOMMU group can be determined using the iommuGroup section of the virsh nodedev-dumpxml output. Each member of the group is provided in a separate "address" field. This information may also be found in sysfs using the following: An example of the output from this would be: To assign only 0000.01.00.0 to the guest, the unused endpoint should be detached from the host before starting the guest: Determine required configuration details See the output from the virsh nodedev-dumpxml pci_0000_00_19_0 command for the values required for the configuration file. The example device has the following values: bus = 0, slot = 25 and function = 0. The decimal configuration uses those three values: Add configuration details Run virsh edit , specifying the virtual machine name, and add a device entry in the <devices> section to assign the PCI device to the guest virtual machine. For example: <devices> [...] <hostdev mode='subsystem' type='pci' managed='yes'> <source> <address domain='0' bus='0' slot='25' function='0'/> </source> </hostdev> [...] </devices> Figure 16.2. Add PCI device Alternately, run virsh attach-device , specifying the virtual machine name and the guest's XML file: Note PCI devices may include an optional read-only memory (ROM) module , also known as an option ROM or expansion ROM , for delivering device firmware or pre-boot drivers (such as PXE) for the device. Generally, these option ROMs also work in a virtualized environment when using PCI device assignment to attach a physical PCI device to a VM. However, in some cases, the option ROM can be unnecessary, which may cause the VM to boot more slowly, or the pre-boot driver delivered by the device can be incompatible with virtualization, which may cause the guest OS boot to fail. In such cases, Red Hat recommends masking the option ROM from the VM. To do so: On the host, verify that the device to assign has an expansion ROM base address register (BAR). To do so, use the lspci -v command for the device, and check the output for a line that includes the following: Add the <rom bar='off'/> element as a child of the <hostdev> element in the guest's XML configuration: <hostdev mode='subsystem' type='pci' managed='yes'> <source> <address domain='0' bus='0' slot='25' function='0'/> </source> <rom bar='off'/> </hostdev> Start the virtual machine The PCI device should now be successfully assigned to the virtual machine, and accessible to the guest operating system. 16.1.2. Assigning a PCI Device with virt-manager PCI devices can be added to guest virtual machines using the graphical virt-manager tool. The following procedure adds a Gigabit Ethernet controller to a guest virtual machine. Procedure 16.4. Assigning a PCI device to a guest virtual machine using virt-manager Open the hardware settings Open the guest virtual machine and click the Add Hardware button to add a new device to the virtual machine. 
Figure 16.3. The virtual machine hardware information window Select a PCI device Select PCI Host Device from the Hardware list on the left. Select an unused PCI device. Note that selecting PCI devices presently in use by another guest causes errors. In this example, a spare audio controller is used. Click Finish to complete setup. Figure 16.4. The Add new virtual hardware wizard Add the new device The setup is complete and the guest virtual machine now has direct access to the PCI device. Figure 16.5. The virtual machine hardware information window Note If device assignment fails, there may be other endpoints in the same IOMMU group that are still attached to the host. There is no way to retrieve group information using virt-manager, but virsh commands can be used to analyze the bounds of the IOMMU group and if necessary sequester devices. See the Note in Section 16.1.1, "Assigning a PCI Device with virsh" for more information on IOMMU groups and how to detach endpoint devices using virsh. 16.1.3. PCI Device Assignment with virt-install It is possible to assign a PCI device when installing a guest using the virt-install command. To do this, use the --host-device parameter. Procedure 16.5. Assigning a PCI device to a virtual machine with virt-install Identify the device Identify the PCI device designated for device assignment to the guest virtual machine. The virsh nodedev-list command lists all devices attached to the system, and identifies each PCI device with a string. To limit output to only PCI devices, enter the following command: Record the PCI device number; the number is needed in other steps. Information on the domain, bus and function are available from output of the virsh nodedev-dumpxml command: <device> <name>pci_0000_01_00_0</name> <parent>pci_0000_00_01_0</parent> <driver> <name>igb</name> </driver> <capability type='pci'> <domain>0</domain> <bus>1</bus> <slot>0</slot> <function>0</function> <product id='0x10c9'>82576 Gigabit Network Connection</product> <vendor id='0x8086'>Intel Corporation</vendor> <iommuGroup number='7'> <address domain='0x0000' bus='0x00' slot='0x19' function='0x0'/> </iommuGroup> </capability> </device> Figure 16.6. PCI device file contents Note If there are multiple endpoints in the IOMMU group and not all of them are assigned to the guest, you will need to manually detach the other endpoint(s) from the host by running the following command before you start the guest: See the Note in Section 16.1.1, "Assigning a PCI Device with virsh" for more information on IOMMU groups. Add the device Use the PCI identifier output from the virsh nodedev command as the value for the --host-device parameter. Complete the installation Complete the guest installation. The PCI device should be attached to the guest. 16.1.4. Detaching an Assigned PCI Device When a host PCI device has been assigned to a guest machine, the host can no longer use the device. If the PCI device is in managed mode (configured using the managed='yes' parameter in the domain XML file ), it attaches to the guest machine and detaches from the guest machine and re-attaches to the host machine as necessary. If the PCI device is not in managed mode, you can detach the PCI device from the guest machine and re-attach it using virsh or virt-manager . Procedure 16.6. 
Detaching a PCI device from a guest with virsh Detach the device Use the following command to detach the PCI device from the guest by removing it in the guest's XML file: Re-attach the device to the host (optional) If the device is in managed mode, skip this step. The device will be returned to the host automatically. If the device is not using managed mode, use the following command to re-attach the PCI device to the host machine: For example, to re-attach the pci_0000_01_00_0 device to the host: The device is now available for host use. Procedure 16.7. Detaching a PCI Device from a guest with virt-manager Open the virtual hardware details screen In virt-manager , double-click the virtual machine that contains the device. Select the Show virtual hardware details button to display a list of virtual hardware. Figure 16.7. The virtual hardware details button Select and remove the device Select the PCI device to be detached from the list of virtual devices in the left panel. Figure 16.8. Selecting the PCI device to be detached Click the Remove button to confirm. The device is now available for host use. 16.1.5. PCI Bridges Peripheral Component Interconnects (PCI) bridges are used to attach to devices such as network cards, modems and sound cards. Just like their physical counterparts, virtual devices can also be attached to a PCI Bridge. In the past, only 31 PCI devices could be added to any guest virtual machine. Now, when a 31st PCI device is added, a PCI bridge is automatically placed in the 31st slot, moving the additional PCI device to the PCI bridge. Each PCI bridge has 31 slots for 31 additional devices, all of which can be bridges. In this manner, over 900 devices can be available for guest virtual machines. For an example of an XML configuration for PCI bridges, see Domain XML example for PCI Bridge . Note that this configuration is set up automatically, and it is not recommended to adjust manually. 16.1.6. PCI Device Assignment Restrictions PCI device assignment (attaching PCI devices to virtual machines) requires host systems to have AMD IOMMU or Intel VT-d support to enable device assignment of PCIe devices. Red Hat Enterprise Linux 7 has limited PCI configuration space access by guest device drivers. This limitation could cause drivers that are dependent on device capabilities or features present in the extended PCI configuration space, to fail configuration. There is a limit of 32 total assigned devices per Red Hat Enterprise Linux 7 virtual machine. This translates to 32 total PCI functions, regardless of the number of PCI bridges present in the virtual machine or how those functions are combined to create multi-function slots. Platform support for interrupt remapping is required to fully isolate a guest with assigned devices from the host. Without such support, the host may be vulnerable to interrupt injection attacks from a malicious guest. In an environment where guests are trusted, the administrator may opt-in to still allow PCI device assignment using the allow_unsafe_interrupts option to the vfio_iommu_type1 module. This may either be done persistently by adding a .conf file (for example local.conf ) to /etc/modprobe.d containing the following: or dynamically using the sysfs entry to do the same:
[ "options vfio_iommu_type1 allow_unsafe_interrupts=1", "echo 1 > /sys/module/vfio_iommu_type1/parameters/allow_unsafe_interrupts", "GRUB_CMDLINE_LINUX=\"rd.lvm.lv=vg_VolGroup00/LogVol01 vconsole.font=latarcyrheb-sun16 rd.lvm.lv=vg_VolGroup_1/root vconsole.keymap=us USD([ -x /usr/sbin/rhcrashkernel-param ] && /usr/sbin/ rhcrashkernel-param || :) rhgb quiet intel_iommu=on iommu=pt \"", "grub2-mkconfig -o /etc/grub2.cfg", "grub2-mkconfig -o /etc/grub2.cfg", "lspci | grep Ethernet 00:19.0 Ethernet controller: Intel Corporation 82567LM-2 Gigabit Network Connection 01:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01) 01:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)", "virsh nodedev-list --cap pci pci_0000_00_00_0 pci_0000_00_01_0 pci_0000_00_03_0 pci_0000_00_07_0 pci_0000_00_10_0 pci_0000_00_10_1 pci_0000_00_14_0 pci_0000_00_14_1 pci_0000_00_14_2 pci_0000_00_14_3 pci_0000_ 00_19_0 pci_0000_00_1a_0 pci_0000_00_1a_1 pci_0000_00_1a_2 pci_0000_00_1a_7 pci_0000_00_1b_0 pci_0000_00_1c_0 pci_0000_00_1c_1 pci_0000_00_1c_4 pci_0000_00_1d_0 pci_0000_00_1d_1 pci_0000_00_1d_2 pci_0000_00_1d_7 pci_0000_00_1e_0 pci_0000_00_1f_0 pci_0000_00_1f_2 pci_0000_00_1f_3 pci_0000_01_00_0 pci_0000_01_00_1 pci_0000_02_00_0 pci_0000_02_00_1 pci_0000_06_00_0 pci_0000_07_02_0 pci_0000_07_03_0", "virsh nodedev-dumpxml pci_0000_00_19_0 <device> <name>pci_0000_00_19_0</name> <parent>computer</parent> <driver> <name>e1000e</name> </driver> <capability type='pci'> <domain>0</domain> <bus>0</bus> <slot>25</slot> <function>0</function> <product id='0x1502'>82579LM Gigabit Network Connection</product> <vendor id='0x8086'>Intel Corporation</vendor> <iommuGroup number='7'> <address domain='0x0000' bus='0x00' slot='0x19' function='0x0'/> </iommuGroup> </capability> </device>", "ls /sys/bus/pci/devices/ 0000:01:00.0 /iommu_group/devices/", "0000:01:00.0 0000:01:00.1", "virsh nodedev-detach pci_0000_01_00_1", "bus='0' slot='25' function='0'", "virsh edit guest1-rhel7-64", "<devices> [...] <hostdev mode='subsystem' type='pci' managed='yes'> <source> <address domain='0' bus='0' slot='25' function='0'/> </source> </hostdev> [...] 
</devices>", "virsh attach-device guest1-rhel7-64 file.xml", "Expansion ROM at", "<hostdev mode='subsystem' type='pci' managed='yes'> <source> <address domain='0' bus='0' slot='25' function='0'/> </source> <rom bar='off'/> </hostdev>", "virsh start guest1-rhel7-64", "lspci | grep Ethernet 00:19.0 Ethernet controller: Intel Corporation 82567LM-2 Gigabit Network Connection 01:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01) 01:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)", "virsh nodedev-list --cap pci pci_0000_00_00_0 pci_0000_00_01_0 pci_0000_00_03_0 pci_0000_00_07_0 pci_0000_00_10_0 pci_0000_00_10_1 pci_0000_00_14_0 pci_0000_00_14_1 pci_0000_00_14_2 pci_0000_00_14_3 pci_0000_00_19_0 pci_0000_00_1a_0 pci_0000_00_1a_1 pci_0000_00_1a_2 pci_0000_00_1a_7 pci_0000_00_1b_0 pci_0000_00_1c_0 pci_0000_00_1c_1 pci_0000_00_1c_4 pci_0000_00_1d_0 pci_0000_00_1d_1 pci_0000_00_1d_2 pci_0000_00_1d_7 pci_0000_00_1e_0 pci_0000_00_1f_0 pci_0000_00_1f_2 pci_0000_00_1f_3 pci_0000_01_00_0 pci_0000_01_00_1 pci_0000_02_00_0 pci_0000_02_00_1 pci_0000_06_00_0 pci_0000_07_02_0 pci_0000_07_03_0", "virsh nodedev-dumpxml pci_0000_01_00_0", "<device> <name>pci_0000_01_00_0</name> <parent>pci_0000_00_01_0</parent> <driver> <name>igb</name> </driver> <capability type='pci'> <domain>0</domain> <bus>1</bus> <slot>0</slot> <function>0</function> <product id='0x10c9'>82576 Gigabit Network Connection</product> <vendor id='0x8086'>Intel Corporation</vendor> <iommuGroup number='7'> <address domain='0x0000' bus='0x00' slot='0x19' function='0x0'/> </iommuGroup> </capability> </device>", "virsh nodedev-detach pci_0000_00_19_1", "virt-install --name=guest1-rhel7-64 --disk path=/var/lib/libvirt/images/guest1-rhel7-64.img,size=8 --vcpus=2 --ram=2048 --location=http://example1.com/installation_tree/RHEL7.0-Server-x86_64/os --nonetworks --os-type=linux --os-variant=rhel7 --host-device= pci_0000_01_00_0", "virsh detach-device name_of_guest file.xml", "virsh nodedev-reattach device", "virsh nodedev-reattach pci_0000_01_00_0", "options vfio_iommu_type1 allow_unsafe_interrupts=1", "echo 1 > /sys/module/vfio_iommu_type1/parameters/allow_unsafe_interrupts" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/chap-guest_virtual_machine_device_configuration
Chapter 5. Managing users
Chapter 5. Managing users From the Admin Console, you have a wide range of actions you can perform to manage users. 5.1. Creating users You create users in the realm where you intend to have applications needed by those users. Avoid creating users in the master realm, which is only intended for creating other realms. Prerequisite You are in a realm other than the master realm. Procedure Click Users in the menu. Click Add User . Enter the details for the new user. Note Username is the only required field. Click Save . After saving the details, the Management page for the new user is displayed. 5.2. Managing user attributes In Red Hat build of Keycloak a user is associated with a set of attributes. These attributes are used to better describe and identify users within Red Hat build of Keycloak as well as to pass over additional information about them to applications. A user profile defines a well-defined schema for representing user attributes and how they are managed within a realm. By providing a consistent view over user information, it allows administrators to control the different aspects on how attributes are managed as well as to make it much easier to extend Red Hat build of Keycloak to support additional attributes. Although the user profile is mainly targeted for attributes that end-users can manage (e.g.: first and last names, phone, etc) it also serves for managing any other metadata you want to associate with your users. Among other capabilities, user profile enables administrators to: Define a schema for user attributes Define whether an attribute is required based on contextual information (e.g.: if required only for users, or admins, or both, or depending on the scope being requested.) Define specific permissions for viewing and editing user attributes, making possible to adhere to strong privacy requirements where some attributes can not be seen or be changed by third-parties (including administrators) Dynamically enforce user profile compliance so that user information is always updated and in compliance with the metadata and rules associated with attributes Define validation rules on a per-attribute basis by leveraging the built-in validators or writing custom ones Dynamically render forms that users interact with like registration, update profile, brokering, and personal information in the account console, according to the attribute definitions and without any need to manually change themes. Customize user management interfaces in the administration console so that attributes are rendered dynamically based on the user profile schema The user profile schema or configuration uses a JSON format to represent attributes and their metadata. From the administration console, you are able to manage the configuration by clicking on the Realm Settings on the left side menu and then clicking on the User Profile tab on that page. In the sections, we'll be looking at how to create your own user profile schema or configuration, and how to manage attributes. 5.2.1. Understanding the Default Configuration By default, Red Hat build of Keycloak provides a basic user profile configuration covering some of the most common user attributes: Name Description username The username email End-User's preferred e-mail address. firstName Given name(s) or first name(s) of the end-user lastName Surname(s) or last name(s) of the End-User In Red Hat build of Keycloak, both username and email attributes have a special handling as they are often used to identify, authenticate, and link user accounts. 
For those attributes, you are limited to changing their settings, and you cannot remove them. Note The behavior of both username and email attributes changes according to the Login settings of your realm. For instance, changing the Email as username or the Edit username settings will override any configuration you have set in the user profile configuration. As you will see in the following sections, you are free to change the default configuration by bringing your own attributes or changing the settings for any of the available attributes to better fit your needs. 5.2.2. Understanding the User Profile Contexts In Red Hat build of Keycloak, users are managed through different contexts: Registration Update Profile Reviewing Profile when authenticating through a broker or social provider Account Console Administrative (e.g.: administration console and Admin REST API) Except for the Administrative context, all other contexts are considered end-user contexts as they are related to user self-service flows. Knowing these contexts is important to understand where your user profile configuration will take effect when managing users. Regardless of the context where the user is being managed, the same user profile configuration will be used to render UIs and validate attribute values. As you will see in the following sections, you might restrict certain attributes to be available only from the administrative context and disable them completely for end-users. The other way around is also true if you don't want administrators to have access to certain user attributes but only the end-user. 5.2.3. Understanding Managed and Unmanaged Attributes By default, Red Hat build of Keycloak will only recognize the attributes defined in your user profile configuration. The server ignores any other attribute not explicitly defined there. By being strict about which user attributes can be set on your users, as well as how their values are validated, Red Hat build of Keycloak can add another defense barrier to your realm and help you to prevent unexpected attributes and values associated with your users. That said, user attributes can be categorized as follows: Managed . These are attributes controlled by your user profile, which you want to allow end-users and administrators to manage from any user profile context. For these attributes, you want complete control over how and when they are managed. Unmanaged . These are attributes you do not explicitly define in your user profile so that they are completely ignored by Red Hat build of Keycloak, by default. Although unmanaged attributes are disabled by default, you can configure your realm using different policies to define how they are handled by the server. For that, click on Realm Settings in the left side menu, click on the General tab, and then choose any of the following options from the Unmanaged Attributes setting: Disabled . This is the default policy, so that unmanaged attributes are disabled in all user profile contexts. Enabled . This policy enables unmanaged attributes in all user profile contexts. Admin can view . This policy enables unmanaged attributes only from the administrative context as read-only. Admin can edit . This policy enables unmanaged attributes only from the administrative context for reads and writes. These policies give you fine-grained control over how the server will handle unmanaged attributes.
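For reference, the option you choose here corresponds to the unmanagedAttributePolicy property in the user profile JSON configuration described later in this chapter, with the values DISABLED, ENABLED, ADMIN_VIEW, or ADMIN_EDIT. The following is a minimal sketch assuming the Admin can edit policy; the attribute entries shown are illustrative only:

{
  "unmanagedAttributePolicy": "ADMIN_EDIT",
  "attributes": [
    { "name": "username" },
    { "name": "email" }
  ]
}

The full JSON schema, including this property, is described in the Using the JSON configuration section.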
You can choose to completely disable or only support unmanaged attributes when managing users through the administrative context. When unmanaged attributes are enabled (even if partially), you can manage them from the administration console at the Attributes tab in the User Details UI. If the policy is set to Disabled , this tab is not available. As a security recommendation, try to adhere to the strictest policy possible (e.g.: Disabled or Admin can edit ) to prevent unexpected attributes (and values) being set on your users when they are managing their profile through end-user contexts. Avoid setting the Enabled policy and prefer defining all the attributes that end-users can manage in your user profile configuration, under your control. Note The Enabled policy is targeted for realms migrating from previous versions of Red Hat build of Keycloak and to avoid breaking behavior when using custom themes and extending the server with their own custom user attributes. As you will see in the following sections, you can also restrict the audience for an attribute by choosing if it should be visible or writable by users and/or administrators. For unmanaged attributes, the maximum length is 2048 characters. To specify a different minimum or maximum length, change the unmanaged attribute to a managed attribute and add a length validator. Warning Red Hat build of Keycloak caches user-related objects in its internal caches. The longer the attributes are, the more memory the cache consumes. Therefore, limiting the length of attribute values is recommended. Consider storing large objects outside Red Hat build of Keycloak and referencing them by ID or URL. 5.2.4. Managing the User Profile The user profile configuration is managed on a per-realm basis. For that, click on the Realm Settings link on the left side menu and then click on the User Profile tab. User Profile Tab In the Attributes sub-tab you have a list of all managed attributes. In the Attribute Groups sub-tab you can manage attribute groups. An attribute group allows you to correlate attributes so that they are displayed together when rendering user-facing forms. In the JSON Editor sub-tab you can view and edit the JSON configuration. You can use this tab to grab your current configuration or manage it manually. Any change you make to this tab is reflected in the other tabs, and vice-versa. In the following section, you are going to learn how to manage attributes. 5.2.5. Managing Attributes At the Attributes sub-tab you can create, edit, and delete the managed attributes. To define a new attribute and associate it with the user profile, click on the Create attribute button at the top of the attribute listing. Attribute Configuration When configuring the attribute you can define the following settings: Name The name of the attribute, used to uniquely identify an attribute. Display name A user-friendly name for the attribute, mainly used when rendering user-facing forms. It also supports Using Internationalized Messages Multivalued If enabled, the attribute supports multiple values and UIs are rendered accordingly to allow setting many values. When enabling this setting, make sure to add a validator to set a hard limit on the number of values. Attribute Group The attribute group to which the attribute belongs, if any. Enabled when Enables or disables an attribute. If set to Always , the attribute is available from any user profile context.
If set to Scopes are requested , the attribute is only available when the client acting on behalf of the user is requesting a set of one or more scopes. You can use this option to dynamically enforce certain attributes depending on the client scopes being requested. For the account and administration consoles, scopes are not evaluated and the attribute is always enabled. That is because filtering attributes by scopes only works when running authentication flows. Required Set the conditions to mark an attribute as required. If disabled, the attribute is optional. If enabled, you can set the Required for setting to mark the attribute as required depending on the user profile context so that the attribute is required for end-users (via end-user contexts) or for administrators (via the administrative context), or both. You can also set the Required when setting to mark the attribute as required only when a set of one or more client scopes is requested. If set to Always , the attribute is required from any user profile context. If set to Scopes are requested , the attribute is only required when the client acting on behalf of the user is requesting a set of one or more scopes. For the account and administration consoles, scopes are not evaluated and the attribute is not required. That is because filtering attributes by scopes only works when running authentication flows. Permission In this section, you can define read and write permissions when the attribute is being managed from an end-user or administrative context. The Who can edit setting marks an attribute as writable by User and/or Admin , from an end-user and administrative context, respectively. The Who can view setting marks an attribute as viewable (read-only) by User and/or Admin from an end-user and administrative context, respectively. Validation In this section, you can define the validations that will be performed when managing the attribute value. Red Hat build of Keycloak provides a set of built-in validators you can choose from with the possibility to add your own. For more details, look at the Validating Attributes section. Annotation In this section, you can associate annotations with the attribute. Annotations are mainly useful to pass over additional metadata to frontends for rendering purposes. For more details, look at the Defining UI Annotations section. When you create an attribute, the attribute is only available from administrative contexts to avoid unexpectedly exposing attributes to end-users. Effectively, the attribute won't be accessible to end-users when they are managing their profile through the end-user contexts. You can change the Permissions settings at any time according to your needs. 5.2.6. Validating Attributes You can enable validation for managed attributes to make sure the attribute value conforms to specific rules. For that, you can add or remove validators from the Validations settings when managing an attribute. Attribute Validation Validation happens at any time when writing to an attribute, and validators can throw errors that are shown in UIs when the value fails a validation. For security reasons, every attribute that is editable by users should have a validation to restrict the size of the values users enter. If no length validator has been specified, Red Hat build of Keycloak defaults to a maximum length of 2048 characters. 5.2.6.1.
Built-in Validators Red Hat build of Keycloak provides some built-in validators that you can choose from, and you are also able to provide your own validators by extending the Validator SPI . The list below provides a list of all the built-in validators: Name Description Configuration length Check the length of a string value based on a minimum and maximum length. min : an integer to define the minimum allowed length. max : an integer to define the maximum allowed length. trim-disabled : a boolean to define whether the value is trimmed prior to validation. integer Check if the value is an integer and within a lower and/or upper range. If no range is defined, the validator only checks whether the value is a valid number. min : an integer to define the lower range. max : an integer to define the upper range. double Check if the value is a double and within a lower and/or upper range. If no range is defined, the validator only checks whether the value is a valid number. min : an integer to define the lower range. max : an integer to define the upper range. uri Check if the value is a valid URI. None pattern Check if the value matches a specific RegEx pattern. pattern : the RegEx pattern to use when validating values. error-message : the key of the error message in i18n bundle. If not set a generic message is used. email Check if the value has a valid e-mail format. max-local-length : an integer to define the maximum length for the local part of the email. It defaults to 64 per specification. local-date Check if the value has a valid format based on the realm and/or user locale. None person-name-prohibited-characters Check if the value is a valid person name as an additional barrier for attacks such as script injection. The validation is based on a default RegEx pattern that blocks characters not common in person names. error-message : the key of the error message in i18n bundle. If not set a generic message is used. username-prohibited-characters Check if the value is a valid username as an additional barrier for attacks such as script injection. The validation is based on a default RegEx pattern that blocks characters not common in usernames. error-message : the key of the error message in i18n bundle. If not set a generic message is used. options Check if the value is from the defined set of allowed values. Useful to validate values entered through select and multiselect fields. options : array of strings containing allowed values. up-username-not-idn-homograph The field can contain only latin characters and common unicode characters. Useful for the fields, which can be subject of IDN homograph attacks (typically username). error-message : the key of the error message in i18n bundle. If not set a generic message is used. multivalued Validates the size of a multivalued attribute. min : an integer to define the minimum allowed count of attribute values. max : an integer to define the maximum allowed count of attribute values. 5.2.7. Defining UI Annotations In order to pass additional information to frontends, attributes can be decorated with annotations to dictate how attributes are rendered. This capability is mainly useful when extending Red Hat build of Keycloak themes to render pages dynamically based on the annotations associated with attributes. Annotations are used, for example, for Changing the HTML type for an Attribute and Changing the DOM representation of an Attribute, as you will see in the following sections. 
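Before looking at the individual annotations, it can help to see how the validators listed above and a single annotation sit together on one attribute definition in the JSON configuration (the JSON format itself is covered in the Using the JSON configuration section below). The following is a hedged sketch; the attribute name, the limits, the regular expression, and the invalidPhone message key are illustrative assumptions rather than values taken from this guide:

{
  "attributes": [
    {
      "name": "phoneNumber",
      "validations": {
        "length": { "min": 7, "max": 20 },
        "pattern": { "pattern": "^[0-9 +-]+$", "error-message": "invalidPhone" }
      },
      "annotations": {
        "inputType": "html5-tel"
      }
    }
  ]
}

Here the length and pattern validators provide the server-side enforcement, while the inputType annotation only affects how the built-in themes render the field.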
Attribute Annotation An annotation is a key/value pair shared with the UI so that the UI can change how the HTML element corresponding to the attribute is rendered. You can set any annotation you want on an attribute as long as the annotation is supported by the theme your realm is using. Note The only restriction is to avoid annotations whose keys use the kc prefix, because annotations with this prefix are reserved for Red Hat build of Keycloak. 5.2.7.1. Built-in Annotations The following annotations are supported by Red Hat build of Keycloak built-in themes: Name Description inputType Type of the form input field. Available types are described in a table below. inputHelperTextBefore Helper text rendered before (above) the input field. Direct text or an internationalization pattern (like ${i18n.key} ) can be used here. Text is NOT HTML-escaped when rendered into the page, so you can use HTML tags here to format the text, but you also have to correctly escape HTML control characters. inputHelperTextAfter Helper text rendered after (under) the input field. Direct text or an internationalization pattern (like ${i18n.key} ) can be used here. Text is NOT HTML-escaped when rendered into the page, so you can use HTML tags here to format the text, but you also have to correctly escape HTML control characters. inputOptionsFromValidation Annotation for select and multiselect types. Optional name of a custom attribute validation to get input options from. See the detailed description below. inputOptionLabelsI18nPrefix Annotation for select and multiselect types. Internationalization key prefix to render options in the UI. See the detailed description below. inputOptionLabels Annotation for select and multiselect types. Optional map to define UI labels for options (directly or using internationalization). See the detailed description below. inputTypePlaceholder HTML input placeholder attribute applied to the field - specifies a short hint that describes the expected value of an input field (e.g. a sample value or a short description of the expected format). The short hint is displayed in the input field before the user enters a value. inputTypeSize HTML input size attribute applied to the field - specifies the width, in characters, of a single line input field. For fields based on the HTML select type it specifies the number of rows of options shown. Might not work, depending on the CSS in the theme used. inputTypeCols HTML input cols attribute applied to the field - specifies the width, in characters, for the textarea type. Might not work, depending on the CSS in the theme used. inputTypeRows HTML input rows attribute applied to the field - specifies the height, in characters, for the textarea type. For select fields it specifies the number of rows of options shown. Might not work, depending on the CSS in the theme used. inputTypePattern HTML input pattern attribute applied to the field providing client side validation - specifies a regular expression that an input field's value is checked against. Useful for single line inputs. inputTypeMaxLength HTML input maxlength attribute applied to the field providing client side validation - maximum length of the text that can be entered into the input field. Useful for text fields. inputTypeMinLength HTML input minlength attribute applied to the field providing client side validation - minimum length of the text that can be entered into the input field. Useful for text fields.
inputTypeMax HTML input max attribute applied to the field providing client side validation - maximum value that can be entered into the input field. Useful for numeric fields. inputTypeMin HTML input min attribute applied to the field providing client side validation - minimum value that can be entered into the input field. Useful for numeric fields. inputTypeStep HTML input step attribute applied to the field - specifies the interval between legal numbers in an input field. Useful for numeric fields. Number Format If set, the data-kcNumberFormat attribute is added to the field to format the value based on a given format. This annotation is targeted for numbers where the format is based on the number of digits expected in a determined position. For instance, a format ({2}) {5}-{4} will format the field value to (00) 00000-0000 . Number UnFormat If set, the data-kcNumberUnFormat attribute is added to the field to format the value based on a given format before submitting the form. This annotation is useful if you do not want to store any format for a specific attribute but only format the value on the client side. For instance, if the current value is (00) 00000-0000 , the value will change to 00000000000 if you set the value {11} to this annotation, or any other format you want by specifying a set of one or more groups of digits. Make sure to add validators to perform server-side validations before storing values. Note Field types use HTML form field tags and attributes applied to them - they behave based on the HTML specifications and browser support for them. Visual rendering also depends on the CSS styles applied in the theme used. 5.2.7.2. Changing the HTML type for an Attribute You can change the type of an HTML5 input element by setting the inputType annotation. The available types are: Name Description HTML tag used text Single line text input. input textarea Multiple line text input. textarea select Common single select input. See the description of how to configure options below. select select-radiobuttons Single select input through a group of radio buttons. See the description of how to configure options below. group of input multiselect Common multiselect input. See the description of how to configure options below. select multiselect-checkboxes Multiselect input through a group of checkboxes. See the description of how to configure options below. group of input html5-email Single line text input for email address based on the HTML 5 spec. input html5-tel Single line text input for phone number based on the HTML 5 spec. input html5-url Single line text input for URL based on the HTML 5 spec. input html5-number Single line input for a number (integer or float depending on step ) based on the HTML 5 spec. input html5-range Slider for entering a number based on the HTML 5 spec. input html5-datetime-local Date Time input based on the HTML 5 spec. input html5-date Date input based on the HTML 5 spec. input html5-month Month input based on the HTML 5 spec. input html5-week Week input based on the HTML 5 spec. input html5-time Time input based on the HTML 5 spec. input 5.2.7.3. Defining options for select and multiselect fields Options for select and multiselect fields are taken from the validation applied to the attribute, to make sure that validation and the field options presented in the UI are always consistent. By default, options are taken from the built-in options validation. You can use various ways to provide nice human-readable labels for select and multiselect options. The simplest case is when attribute values are the same as UI labels.
No extra configuration is necessary in this case. Option values same as UI labels When the attribute value is a kind of ID that is not suitable for the UI, you can use the simple internationalization support provided by the inputOptionLabelsI18nPrefix annotation. It defines a prefix for internationalization keys; the option value is appended to this prefix after a dot. Simple internationalization for UI labels using i18n key prefix Localized UI label texts for the option values then have to be provided by the userprofile.jobtitle.sweng and userprofile.jobtitle.swarch keys, using the common localization mechanism. You can also use the inputOptionLabels annotation to provide labels for individual options. It contains a map of labels for the options - the key in the map is the option value (defined in the validation), and the value in the map is the UI label text itself or its internationalization pattern (like ${i18n.key} ) for that option. Note You have to use the User Profile JSON Editor to enter the map as the inputOptionLabels annotation value. Example of directly entered labels for individual options without internationalization: "attributes": [ ... { "name": "jobTitle", "validations": { "options": { "options":[ "sweng", "swarch" ] } }, "annotations": { "inputType": "select", "inputOptionLabels": { "sweng": "Software Engineer", "swarch": "Software Architect" } } } ... ] Example of the internationalized labels for individual options: "attributes": [ ... { "name": "jobTitle", "validations": { "options": { "options":[ "sweng", "swarch" ] } }, "annotations": { "inputType": "select-radiobuttons", "inputOptionLabels": { "sweng": "${jobtitle.swengineer}", "swarch": "${jobtitle.swarchitect}" } } } ... ] Localized texts then have to be provided by the jobtitle.swengineer and jobtitle.swarchitect keys, using the common localization mechanism. A custom validator can be used to provide options thanks to the inputOptionsFromValidation attribute annotation. This validation has to have an options config providing an array of options. Internationalization works the same way as for options provided by the built-in options validation. Options provided by custom validator 5.2.7.4. Changing the DOM representation of an Attribute You can enable additional client-side behavior by setting annotations with the kc prefix. These annotations are going to translate into an HTML attribute in the corresponding element of an attribute, prefixed with data- , and a script with the same name will be loaded into the dynamic pages so that you can select elements from the DOM based on the custom data- attribute and decorate them accordingly by modifying their DOM representation. For instance, if you add a kcMyCustomValidation annotation to an attribute, the HTML attribute data-kcMyCustomValidation is added to the corresponding HTML element for the attribute, and a JavaScript module is loaded from your custom theme at <THEME TYPE>/resources/js/kcMyCustomValidation.js . See the Server Developer Guide for more information about how to deploy a custom JavaScript module to your theme. The JavaScript module can run any code to customize the DOM and the elements rendered for each attribute.
For that, you can use the userProfile.js module to register an annotation descriptor for your custom annotation as follows: import { registerElementAnnotatedBy } from "./userProfile.js"; registerElementAnnotatedBy({ name: 'kcMyCustomValidation', onAdd(element) { var listener = function (event) { // do something on keyup }; element.addEventListener("keyup", listener); // returns a cleanup function to remove the event listener return () => element.removeEventListener("keyup", listener); } }); registerElementAnnotatedBy is a method that registers annotation descriptors. A descriptor is an object with a name , referencing the annotation name, and an onAdd function. Whenever the page is rendered or an attribute with the annotation is added to the DOM, the onAdd function is invoked so that you can customize the behavior for the element. The onAdd function can also return a function to perform a cleanup. For instance, if you are adding event listeners to elements, you might want to remove them in case the element is removed from the DOM. Alternatively, you can also use any JavaScript code you want if the userProfile.js module is not enough for your needs: document.querySelectorAll(`[data-kcMyCustomValidation]`).forEach((element) => { var listener = function (evt) { // do something on keyup }; element.addEventListener("keyup", listener); }); 5.2.8. Managing Attribute Groups At the Attribute Groups sub-tab you can create, edit, and delete attribute groups. An attribute group allows you to define a container for correlated attributes so that they are rendered together on user-facing forms. Attribute Group List Note You can't delete attribute groups that are bound to attributes. For that, you should first update the attributes to remove the binding. To create a new group, click on the Create attributes group button at the top of the attribute groups listing. Attribute Group Configuration When configuring the group you can define the following settings: Name The name of the attribute group, used to uniquely identify the group. Display name A user-friendly name for the attribute group, mainly used when rendering user-facing forms. It also supports Using Internationalized Messages Display description A user-friendly text that will be displayed as a tooltip when rendering user-facing forms. It also supports Using Internationalized Messages Annotation In this section, you can associate annotations with the attribute group. Annotations are mainly useful to pass over additional metadata to frontends for rendering purposes. 5.2.9. Using the JSON configuration The user profile configuration is stored using a well-defined JSON schema. You can choose to edit the user profile configuration directly by clicking on the JSON Editor sub-tab. JSON Configuration The JSON schema is defined as follows: { "unmanagedAttributePolicy": "DISABLED", "attributes": [ { "name": "myattribute", "multivalued": false, "displayName": "My Attribute", "group": "personalInfo", "required": { "roles": [ "user", "admin" ], "scopes": [ "foo", "bar" ] }, "permissions": { "view": [ "admin", "user" ], "edit": [ "admin", "user" ] }, "validations": { "email": { "max-local-length": 64 }, "length": { "max": 255 } }, "annotations": { "myannotation": "myannotation-value" } } ], "groups": [ { "name": "personalInfo", "displayHeader": "Personal Information", "annotations": { "foo": ["foo-value"], "bar": ["bar-value"] } } ] } The schema supports as many attributes and groups as you need.
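As a further example, the earlier recommendation to add a validator that caps the number of values on a multivalued attribute (see the Multivalued setting in Managing Attributes) might look like the following in this schema. This is a hedged sketch; the attribute name and the limits are illustrative assumptions:

{
  "attributes": [
    {
      "name": "skills",
      "multivalued": true,
      "validations": {
        "multivalued": { "max": 3 },
        "length": { "max": 64 }
      },
      "permissions": {
        "view": [ "admin", "user" ],
        "edit": [ "admin", "user" ]
      }
    }
  ]
}

The intent is that the multivalued validator limits how many values can be stored, while the length validator still constrains the values themselves, as described in the Built-in Validators section.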
The unmanagedAttributePolicy property defines the unmanaged attribute policy by setting one of the following values. For more details, look at the Understanding Managed and Unmanaged Attributes section. DISABLED ENABLED ADMIN_VIEW ADMIN_EDIT 5.2.9.1. Attribute Schema For each attribute you should define a name and, optionally, the required , permissions , and annotations settings. The required property defines whether an attribute is required. Red Hat build of Keycloak allows you to set an attribute as required based on different conditions. When the required property is defined as an empty object, the attribute is always required. { "attributes": [ { "name": "myattribute", "required": {} } ] } On the other hand, you can choose to make the attribute required only for users, or administrators, or both. You can also mark the attribute as required only in case a specific scope is requested when the user is authenticating in Red Hat build of Keycloak. To mark an attribute as required for a user and/or administrator, set the roles property as follows: { "attributes": [ { "name": "myattribute", "required": { "roles": ["user"] } } ] } The roles property expects an array whose values can be either user or admin , depending on whether the attribute is required by the user or the administrator, respectively. Similarly, you can choose to make the attribute required when a set of one or more scopes is requested by a client when authenticating a user. For that, you can use the scopes property as follows: { "attributes": [ { "name": "myattribute", "required": { "scopes": ["foo"] } } ] } The scopes property is an array whose values can be any string representing a client scope. The attribute-level permissions property can be used to define the read and write permissions for an attribute. The permissions are set based on whether these operations can be performed on the attribute by a user, or administrator, or both. { "attributes": [ { "name": "myattribute", "permissions": { "view": ["admin"], "edit": ["user"] } } ] } Both the view and edit properties expect an array whose values can be either user or admin , depending on whether the attribute is viewable or editable by the user or the administrator, respectively. When the edit permission is granted, the view permission is implicitly granted. The attribute-level annotations property can be used to associate additional metadata with attributes. Annotations are mainly useful for passing over additional information about attributes to frontends rendering user attributes based on the user profile configuration. Each annotation is a key/value pair. { "attributes": [ { "name": "myattribute", "annotations": { "foo": ["foo-value"], "bar": ["bar-value"] } } ] } 5.2.9.2. Attribute Group Schema For each attribute group you should define a name and, optionally, the annotations settings. The group-level annotations property can be used to associate additional metadata with groups. Annotations are mainly useful for passing over additional information about attributes to frontends rendering user attributes based on the user profile configuration. Each annotation is a key/value pair. 5.2.10. Customizing How UIs are Rendered The UIs from all the user profile contexts (including the administration console) are rendered dynamically according to your user profile configuration. The default rendering mechanism provides the following capabilities: Show or hide fields based on the permissions set on attributes. Render markers for required fields based on the constraints set on the attributes.
Change the field input type (text, date, number, select, multiselect) set on an attribute. Mark fields as read-only depending on the permissions set on an attribute. Order fields depending on the order set on the attributes. Dynamically group fields that belong to the same attribute group. 5.2.10.1. Ordering attributes The attribute order is set by dragging and dropping the attribute rows on the attribute listing page. Ordering Attributes The order you set in this page is respected when fields are rendered in dynamic forms. 5.2.10.2. Grouping attributes When dynamic forms are rendered, they will try to group together attributes that belong to the same attribute group. Dynamic Update Profile Form Note When attributes are linked to an attribute group, the attribute order is also important to make sure attributes within the same group are close together, under the same group header. Otherwise, if attributes within a group do not have a sequential order, you might have the same group header rendered multiple times in the dynamic form. 5.2.11. Enabling Progressive Profiling In order to make sure end-user profiles are in compliance with the configuration, administrators can use the VerifyProfile required action to eventually force users to update their profiles when authenticating to Red Hat build of Keycloak. Note The VerifyProfile action is similar to the UpdateProfile action. However, it leverages all the capabilities provided by the user profile to automatically enforce compliance with the user profile configuration. When enabled, the VerifyProfile action is going to perform the following steps when the user is authenticating: Check whether the user profile is fully compliant with the user profile configuration set on the realm. That means running validations and making sure all of them are successful. If not, perform an additional step during the authentication so that the user can update any missing or invalid attribute. If the user profile is compliant with the configuration, no additional step is performed, and the user continues with the authentication process. The VerifyProfile action is enabled by default. To disable it, click on the Authentication link on the left side menu and then click on the Required Actions tab. At this tab, use the Enabled switch of the VerifyProfile action to disable it. Registering the VerifyProfile Required Action 5.2.12. Using Internationalized Messages If you want to use internationalized messages when configuring attributes, attribute groups, and annotations, you can set their display name, description, and values using a placeholder that will translate to a message from a message bundle. For that, you can use a placeholder to resolve message keys such as ${myAttributeName} , where myAttributeName is the key for a message in a message bundle. For more details, look at the Server Developer Guide for how to add message bundles to custom themes. 5.3. Defining user credentials You can manage the credentials of a user in the Credentials tab. Credential management You change the priority of credentials by dragging and dropping rows. The new order determines the priority of the credentials for that user. The topmost credential has the highest priority. The priority determines which credential is displayed first after a user logs in. Type This column displays the type of credential, for example password or OTP .
User Label This is an assignable label to recognize the credential when presented as a selection option during login. It can be set to any value to describe the credential. Data This is the non-confidential technical information about the credential. It is hidden, by default. You can click Show data... to display the data for a credential. Actions Click Reset password to change the password for the user and Delete to remove the credential. You cannot configure other types of credentials for a specific user in the Admin Console; that task is the user's responsibility. You can delete the credentials of a user in the event a user loses an OTP device or if credentials have been compromised. You can only delete credentials of a user in the Credentials tab. 5.3.1. Setting a password for a user If a user does not have a password, or if the password has been deleted, the Set Password section is displayed. If a user already has a password, it can be reset in the Reset Password section. Procedure Click Users in the menu. The Users page is displayed. Select a user. Click the Credentials tab. Type a new password in the Set Password section. Click Set Password . Note If Temporary is ON , the user must change the password at the first login. To allow users to keep the password supplied, set Temporary to OFF. The user must click Set Password to change the password. 5.3.2. Requesting a user reset a password You can also request that the user reset the password. Procedure Click Users in the menu. The Users page is displayed. Select a user. Click the Credentials tab. Click Credential Reset . Select Update Password from the list. Click Send Email . The sent email contains a link that directs the user to the Update Password window. Optionally, you can set the validity of the email link. This is set to the default preset in the Tokens tab in Realm Settings . 5.3.3. Creating an OTP If OTP is conditional in your realm, the user must navigate to the Red Hat build of Keycloak Account Console to reconfigure a new OTP generator. If OTP is required, then the user must reconfigure a new OTP generator when logging in. Alternatively, you can send an email to the user that requests the user reset the OTP generator. The following procedure also applies if the user already has an OTP credential. Prerequisite You are logged in to the appropriate realm. Procedure Click Users in the main menu. The Users page is displayed. Select a user. Click the Credentials tab. Click Credential Reset . Set Reset Actions to Configure OTP . Click Send Email . The sent email contains a link that directs the user to the OTP setup page . 5.4. Allowing users to self-register You can use Red Hat build of Keycloak as a third-party authorization server to manage application users, including users who self-register. If you enable self-registration, the login page displays a registration link so that users can create an account. Registration link A user must add profile information to the registration form to complete registration. The registration form can be customized by removing or adding the fields that must be completed by a user. Clarification on identity brokering and admin API Even when self-registration is disabled, new users can still be added to Red Hat build of Keycloak by either of the following: An administrator can add new users by using the Admin Console (or the Admin REST API) When identity brokering is enabled, new users authenticated by an identity provider may be automatically added/registered in Red Hat build of Keycloak storage.
See the First login flow section in the Identity Brokering chapter for more information. Also users coming from the 3rd-party user storage (for example LDAP) are automatically available in Red Hat build of Keycloak when the particular user storage is enabled Additional resources For more information on customizing user registration, see the Server Developer Guide . 5.4.1. Enabling user registration Enable users to self-register. Procedure Click Realm Settings in the main menu. Click the Login tab. Toggle User Registration to ON . After you enable this setting, a Register link displays on the login page of the Admin Console. 5.4.2. Registering as a new user As a new user, you must complete a registration form to log in for the first time. You add profile information and a password to register. Registration form Prerequisite User registration is enabled. Procedure Click the Register link on the login page. The registration page is displayed. Enter the user profile information. Enter the new password. Click Register . 5.4.3. Requiring user to agree to terms and conditions during registration For a user to register, you can require agreement to your terms and conditions. Registration form with required terms and conditions agreement Prerequisite User registration is enabled. Terms and conditions required action is enabled. Procedure Click Authentication in the menu. Click the Flows tab. Click the registration flow. Select Required on the Terms and Conditions row. Make the terms and conditions agreement required at registration 5.5. Defining actions required at login You can set the actions that a user must perform at the first login. These actions are required after the user provides credentials. After the first login, these actions are no longer required. You add required actions on the Details tab of that user. Some required actions are automatically triggered for the user during login even if they are not explicitly added to this user by the administrator. For example Update password action can be triggered if Password policies are configured in a way that the user password needs to be changed every X days. Or verify profile action can require the user to update the User profile as long as some user attributes do not match the requirements according to the user profile configuration. The following are examples of required action types: Update Password The user must change their password. Configure OTP The user must configure a one-time password generator on their mobile device using either the Free OTP or Google Authenticator application. Verify Email The user must verify their email account. An email will be sent to the user with a validation link that they must click. Once this workflow is successfully completed, the user will be allowed to log in. Update Profile The user must update profile information, such as name, address, email, and phone number. Note Some actions do not makes sense to be added to the user account directly. For example, the Update User Locale is a helper action to handle some localization related parameters. Another example is the Delete Credential action, which is supposed to be triggered as a Parameterized AIA . Regarding this one, if the administrator wants to delete the credential of some user, that administrator can do it directly in the Admin Console. The Delete Credential action is dedicated to be used for example by the Red Hat build of Keycloak Account Console . 5.5.1. 
Setting required actions for one user You can set the actions that are required for any user. Procedure Click Users in the menu. Select a user from the list. Navigate to the Required User Actions list. Select all the actions you want to add to the account. Click the X to the action name to remove it. Click Save after you select which actions to add. 5.5.2. Setting required actions for all users You can specify what actions are required before the first login of all new users. The requirements apply to a user created by the Add User button on the Users page or the Register link on the login page. Procedure Click Authentication in the menu. Click the Required Actions tab. Click the checkbox in the Set as default action column for one or more required actions. When a new user logs in for the first time, the selected actions must be executed. 5.5.3. Enabling terms and conditions as a required action You can enable a required action that new users must accept the terms and conditions before logging in to Red Hat build of Keycloak for the first time. Procedure Click Authentication in the menu. Click the Required Actions tab. Enable the Terms and Conditions action. Edit the terms.ftl file in the base login theme. Additional resources For more information on extending and creating themes, see the Server Developer Guide . 5.6. Application initiated actions Application initiated actions (AIA) allow client applications to request a user to perform an action on the Red Hat build of Keycloak side. Usually, when an OIDC client application wants a user to log in, it redirects that user to the login URL as described in the OIDC section . After login, the user is redirected back to the client application. The user performs the actions that were required by the administrator as described in the section and then is immediately redirected back to the application. However, AIA allows the client application to request some required actions from the user during login. This can be done even if the user is already authenticated on the client and has an active SSO session. It is triggered by adding the kc_action parameter to the OIDC login URL with the value containing the requested action. For instance kc_action=UPDATE_PASSWORD parameter. Note The kc_action parameter is a Red Hat build of Keycloak proprietary mechanism unsupported by the OIDC specification. Note Application initiated actions are supported only for OIDC clients. So if AIA is used, an example flow is similar to the following: A client application redirects the user to the OIDC login URL with the additional parameter such as kc_action=UPDATE_PASSWORD There is a browser flow always triggered as described in the Authentication flows section . If the user was not authenticated, that user needs to authenticate as during normal login. In case the user was already authenticated, that user might be automatically re-authenticated by an SSO cookie without needing to actively re-authenticate and supply the credentials again. In this case, that user will be directly redirected to the screen with the particular action (update password in this case). However, in some cases, active re-authentication is required even if the user has an SSO cookie (See below for the details). 
The screen with the particular action (in this case update password ) is displayed to the user, so that user needs to perform the particular action. Then the user is redirected back to the client application. Note that AIA are used by the Red Hat build of Keycloak Account Console to request update password or to reset other credentials such as OTP or WebAuthn. Warning Even if the parameter kc_action was used, it is not sufficient to assume that the user always performs the action. For example, a user could have manually deleted the kc_action parameter from the browser URL. Therefore, no guarantee exists that the user has an OTP for the account after the client requested kc_action=CONFIGURE_TOTP . If you want to verify that the user configured a two-factor authenticator, the client application may need to check that it was configured. For instance, by checking claims like acr in the tokens. 5.6.1. Re-authentication during AIA In case the user is already authenticated due to an active SSO session, that user usually does not need to actively re-authenticate. However, if that user actively authenticated longer than five minutes ago, the client can still request re-authentication when some AIA is requested. Exceptions to this guideline exist as follows: The action delete_account will always require the user to actively re-authenticate. The action update_password might require the user to actively re-authenticate according to the configured Maximum Authentication Age Password policy . In case the policy is not configured, it is also possible to configure it on the required action itself in the Required actions tab when configuring the particular required action. If the policy is not configured in any of those places, it defaults to five minutes. If you want to use a shorter re-authentication, you can still use a query parameter such as max_age with the specified shorter value, or prompt=login , which will always require the user to actively re-authenticate as described in the OIDC specification. Note that using max_age for a longer value than the default five minutes (or the one prescribed by the password policy) is not supported. The max_age parameter can currently be used only to make the value shorter than the default five minutes. If Step-up authentication is enabled and the action is to add or delete a credential, authentication is required with the level corresponding to the given credential. This requirement exists in case the user already has a credential of the particular level. For example, if otp and webauthn are configured in the authentication flow as 2nd-factor authenticators (both in the authentication flow at level 2) and the user already has a 2nd-factor credential ( otp or webauthn in this case), the user is required to authenticate with an existing 2nd-factor credential to add another 2nd-level credential. In the same manner, when deleting an existing 2nd-factor credential ( otp or webauthn in this case), authentication with an existing 2nd-factor credential is required. The requirement exists for security reasons. 5.6.2. Parameterized AIA Some AIA require a parameter to be sent together with the action name. For instance, the Delete Credential action can be triggered only by AIA and it requires a parameter to be sent together with the name of the action, which points to the ID of the credential to be removed. So the URL for this example would be kc_action=delete_credential:ce1008ac-f811-427f-825a-c0b878d1c24b .
In this case, the part after the colon character ( ce1008ac-f811-427f-825a-c0b878d1c24b ) contains the ID of the particular user's credential that is to be deleted. The Delete Credential action displays the confirmation screen where the user can confirm agreement to delete the credential. Note The Red Hat build of Keycloak Account Console typically uses the Delete Credential action when deleting a 2nd-factor credential. You can check the Account Console for examples if you want to use this action directly from your own applications. However, it is best to rely on the Account Console instead of managing credentials from your own applications. 5.6.3. Available actions To see all available actions, log in to the Admin Console and go to the top right corner to click Realm info tab Provider info . Find the provider required-action . But note that this can be further restricted based on what actions are enabled for your realm in the Required actions tab . 5.7. Searching for a user Search for a user to view detailed information about the user, such as the user's groups and roles. Prerequisite You are in the realm where the user exists. 5.7.1. Default search Procedure Click Users in the main menu. The Users page is displayed. Type the full name, last name, first name, or email address of the user you want to search for in the search box. The search returns all users that match your criteria. The criteria used to match users depend on the syntax used in the search box: "somevalue" performs an exact search of the string "somevalue" ; *somevalue* performs an infix search, akin to a LIKE '%somevalue%' DB query; somevalue* or somevalue performs a prefix search, akin to a LIKE 'somevalue%' DB query. 5.7.2. Attribute search Procedure Click Users in the main menu. The Users page is displayed. Click the Default search button and switch it to Attribute search . Click the Select attributes button and specify the attributes to search by. Check the Exact search checkbox to perform an exact match or keep it unchecked to use an infix search for attribute values. Click the Search button to perform the search. It returns all users that match the criteria. Note Searches performed in the Users page encompass both Red Hat build of Keycloak's database and configured user federation backends, such as LDAP. Users found in federated backends will be imported into Red Hat build of Keycloak's database if they don't already exist there. Additional Resources For more information on user federation, see User Federation . 5.8. Deleting a user You can delete a user who no longer needs access to applications. If a user is deleted, the user profile and data are also deleted. Procedure Click Users in the menu. The Users page is displayed. Click View all users to find a user to delete. Note Alternatively, you can use the search bar to find a user. Click Delete from the action menu next to the user you want to remove and confirm the deletion. 5.9. Enabling account deletion by users End users and applications can delete their accounts in the Account Console if you enable this capability in the Admin Console. Once you enable this capability, you can give that capability to specific users. 5.9.1. Enabling the Delete Account Capability You enable this capability on the Required Actions tab. Procedure Click Authentication in the menu. Click the Required Actions tab. Select Enabled on the Delete Account row. Delete account on required actions tab 5.9.2. Giving a user the delete-account role You can give specific users a role that allows account deletion.
Procedure Click Users in the menu. Select a user. Click the Role Mappings tab. Click the Assign role button. Click account delete-account . Click Assign . Delete-account role 5.9.3. Deleting your account Once you have the delete-account role, you can delete your own account. Log in to the Account Console. At the bottom of the Personal Info page, click Delete Account . Delete account page Enter your credentials and confirm the deletion. Delete confirmation Note This action is irreversible. All your data in Red Hat build of Keycloak will be removed. 5.10. Impersonating a user An administrator with the appropriate permissions can impersonate a user. For example, if a user experiences a bug in an application, an administrator can impersonate the user to investigate or duplicate the issue. Any user with the impersonation role in the realm can impersonate a user. Procedure Click Users in the menu. Click a user to impersonate. From the Actions list, select Impersonate . If the administrator and the user are in the same realm, then the administrator will be logged out and automatically logged in as the user being impersonated. If the administrator and user are in different realms, the administrator will remain logged in, and additionally will be logged in as the user in that user's realm. In both instances, the Account Console of the impersonated user is displayed. Additional resources For more information on assigning administration permissions, see the Admin Console Access Control chapter. 5.11. Enabling reCAPTCHA To safeguard registration against bots, Red Hat build of Keycloak has integration with Google reCAPTCHA. Once reCAPTCHA is enabled, you can edit register.ftl in your login theme to configure the placement and styling of the reCAPTCHA button on the registration page. Procedure Enter the following URL in a browser: https://developers.google.com/recaptcha/ Create an API key to get your reCAPTCHA site key and secret. Note the reCAPTCHA site key and secret for future use in this procedure. Note localhost works by default. You do not have to specify a domain. Navigate to the Red Hat build of Keycloak admin console. Click Authentication in the menu. Click the Flows tab. Select Registration from the list. Set the reCAPTCHA requirement to Required . This enables reCAPTCHA. Click the gear icon ⚙️ on the reCAPTCHA row. Click the Config link. Recaptcha config page Enter the Recaptcha Site Key generated from the Google reCAPTCHA website. Enter the Recaptcha Secret generated from the Google reCAPTCHA website. Authorize Google to use the registration page as an iframe. Note In Red Hat build of Keycloak, websites cannot include a login page dialog in an iframe. This restriction is to prevent clickjacking attacks. You need to change the default HTTP response headers that are set in Red Hat build of Keycloak. Click Realm Settings in the menu. Click the Security Defenses tab. Enter https://www.google.com in the field for the X-Frame-Options header. Enter https://www.google.com in the field for the Content-Security-Policy header. Additional resources For more information on extending and creating themes, see the Server Developer Guide . 5.12. Personal data collected by Red Hat build of Keycloak By default, Red Hat build of Keycloak collects the following data: Basic user profile data, such as the user email, first name, and last name. Basic user profile data used for social accounts and references to the social account when using a social login.
Device information collected for audit and security purposes, such as the IP address, operating system name, and the browser name. The information collected in Red Hat build of Keycloak is highly customizable. The following guidelines apply when making customizations: Registration and account forms can contain custom fields, such as birthday, gender, and nationality. An administrator can configure Red Hat build of Keycloak to retrieve data from a social provider or a user storage provider such as LDAP. Red Hat build of Keycloak collects user credentials, such as password, OTP codes, and WebAuthn public keys. This information is encrypted and saved in a database, so it is not visible to Red Hat build of Keycloak administrators. Each type of credential can include non-confidential metadata that is visible to administrators such as the algorithm that is used to hash the password and the number of hash iterations used to hash the password. With authorization services and UMA support enabled, Red Hat build of Keycloak can hold information about some objects for which a particular user is the owner.
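To tie together several of the concepts covered in this chapter - user creation, profile attributes, credentials, and required actions - the following is a hedged sketch of the kind of user representation that the Admin REST API mentioned earlier accepts when creating a user (typically a POST to /admin/realms/<realm>/users). The field values, the jobTitle attribute, and the choice of required action are illustrative assumptions; consult the Admin REST API documentation for your version for the authoritative representation:

{
  "username": "jdoe",
  "enabled": true,
  "email": "jdoe@example.com",
  "firstName": "Jane",
  "lastName": "Doe",
  "attributes": {
    "jobTitle": [ "sweng" ]
  },
  "credentials": [
    { "type": "password", "value": "initial-password", "temporary": true }
  ],
  "requiredActions": [ "UPDATE_PASSWORD" ]
}

Setting temporary to true mirrors the Temporary switch described in Setting a password for a user: the user must change this password at the first login.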
[ "\"attributes\": [ < { \"name\": \"jobTitle\", \"validations\": { \"options\": { \"options\":[ \"sweng\", \"swarch\" ] } }, \"annotations\": { \"inputType\": \"select\", \"inputOptionLabels\": { \"sweng\": \"Software Engineer\", \"swarch\": \"Software Architect\" } } } ]", "\"attributes\": [ { \"name\": \"jobTitle\", \"validations\": { \"options\": { \"options\":[ \"sweng\", \"swarch\" ] } }, \"annotations\": { \"inputType\": \"select-radiobuttons\", \"inputOptionLabels\": { \"sweng\": \"USD{jobtitle.swengineer}\", \"swarch\": \"USD{jobtitle.swarchitect}\" } } } ]", "import { registerElementAnnotatedBy } from \"./userProfile.js\"; registerElementAnnotatedBy({ name: 'kcMyCustomValidation', onAdd(element) { var listener = function (event) { // do something on keyup }; element.addEventListener(\"keyup\", listener); // returns a cleanup function to remove the event listener return () => element.removeEventListener(\"keyup\", listener); } });", "document.querySelectorAll(`[data-kcMyCustomValidation]`).forEach((element) => { var listener = function (evt) { // do something on keyup }; element.addEventListener(\"keyup\", listener); });", "{ \"unmanagedAttributePolicy\": \"DISABLED\", \"attributes\": [ { \"name\": \"myattribute\", \"multivalued\": false, \"displayName\": \"My Attribute\", \"group\": \"personalInfo\", \"required\": { \"roles\": [ \"user\", \"admin\" ], \"scopes\": [ \"foo\", \"bar\" ] }, \"permissions\": { \"view\": [ \"admin\", \"user\" ], \"edit\": [ \"admin\", \"user\" ] }, \"validations\": { \"email\": { \"max-local-length\": 64 }, \"length\": { \"max\": 255 } }, \"annotations\": { \"myannotation\": \"myannotation-value\" } } ], \"groups\": [ { \"name\": \"personalInfo\", \"displayHeader\": \"Personal Information\", \"annotations\": { \"foo\": [\"foo-value\"], \"bar\": [\"bar-value\"] } } ] }", "{ \"attributes\": [ { \"name\": \"myattribute\", \"required\": {} ] }", "{ \"attributes\": [ { \"name\": \"myattribute\", \"required\": { \"roles\": [\"user\"] } ] }", "{ \"attributes\": [ { \"name\": \"myattribute\", \"required\": { \"scopes\": [\"foo\"] } ] }", "{ \"attributes\": [ { \"name\": \"myattribute\", \"permissions\": { \"view\": [\"admin\"], \"edit\": [\"user\"] } ] }", "{ \"attributes\": [ { \"name\": \"myattribute\", \"annotations\": { \"foo\": [\"foo-value\"], \"bar\": [\"bar-value\"] } ] }", "https://developers.google.com/recaptcha/" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/server_administration_guide/assembly-managing-users_server_administration_guide
10.5. Host Tasks
10.5. Host Tasks 10.5.1. Adding Standard Hosts to the Red Hat Virtualization Manager Adding a host to your Red Hat Virtualization environment can take some time, as the following steps are completed by the platform: virtualization checks, installation of packages, and creation of a bridge. Important When creating a management bridge that uses a static IPv6 address, disable network manager control in its interface configuration (ifcfg) file before adding a host. See https://access.redhat.com/solutions/3981311 for more information. Procedure From the Administration Portal, click Compute Hosts . Click New . Use the drop-down list to select the Data Center and Host Cluster for the new host. Enter the Name and the Address of the new host. The standard SSH port, port 22, is auto-filled in the SSH Port field. Select an authentication method to use for the Manager to access the host. Enter the root user's password to use password authentication. Alternatively, copy the key displayed in the SSH PublicKey field to /root/.ssh/authorized_keys on the host to use public key authentication. Optionally, click the Advanced Parameters button to change the following advanced host settings: Disable automatic firewall configuration. Add a host SSH fingerprint to increase security. You can add it manually, or fetch it automatically. Optionally configure power management, where the host has a supported power management card. For information on power management configuration, see Host Power Management Settings Explained in the Administration Guide . Click OK . The new host displays in the list of hosts with a status of Installing , and you can view the progress of the installation in the Events section of the Notification Drawer ( ). After a brief delay the host status changes to Up . 10.5.2. Adding a Satellite Host Provider Host The process for adding a Satellite host provider host is almost identical to that of adding a Red Hat Enterprise Linux host except for the method by which the host is identified in the Manager. The following procedure outlines how to add a host provided by a Satellite host provider. Adding a Satellite Host Provider Host Click Compute Hosts . Click New . Use the drop-down menu to select the Host Cluster for the new host. Select the Foreman/Satellite check box to display the options for adding a Satellite host provider host and select the provider from which the host is to be added. Select either Discovered Hosts or Provisioned Hosts . Discovered Hosts (default option): Select the host, host group, and compute resources from the drop-down lists. Provisioned Hosts : Select a host from the Providers Hosts drop-down list. Any details regarding the host that can be retrieved from the external provider are automatically set, and can be edited as desired. Enter the Name and SSH Port (Provisioned Hosts only) of the new host. Select an authentication method to use with the host. Enter the root user's password to use password authentication. Copy the key displayed in the SSH PublicKey field to /root/.ssh/authorized_hosts on the host to use public key authentication (Provisioned Hosts only). You have now completed the mandatory steps to add a Red Hat Enterprise Linux host. Click the Advanced Parameters drop-down button to show the advanced host settings. Optionally disable automatic firewall configuration. Optionally add a host SSH fingerprint to increase security. You can add it manually, or fetch it automatically. 
You can configure the Power Management , SPM , Console , and Network Provider using the applicable tabs now; however, as these are not fundamental to adding a Red Hat Enterprise Linux host, they are not covered in this procedure. Click OK to add the host and close the window. The new host displays in the list of hosts with a status of Installing , and you can view the progress of the installation in the details view. After installation is complete, the status will update to Reboot . The host must be activated for the status to change to Up . 10.5.3. Configuring Satellite Errata Management for a Host Red Hat Virtualization can be configured to view errata from Red Hat Satellite. This enables the host administrator to receive updates about available errata, and their importance, in the same dashboard used to manage host configuration. For more information about Red Hat Satellite see the Red Hat Satellite Documentation . Red Hat Virtualization 4.3 supports errata management with Red Hat Satellite 6.5. Important Hosts are identified in the Satellite server by their FQDN. Hosts added using an IP address will not be able to report errata. This ensures that an external content host ID does not need to be maintained in Red Hat Virtualization. The Satellite account used to manage the host must have Administrator permissions and a default organization set. Configuring Satellite Errata Management for a Host Add the Satellite server as an external provider. See Section 14.2.1, "Adding a Red Hat Satellite Instance for Host Provisioning" for more information. Associate the required host with the Satellite server. Note The host must be registered to the Satellite server and have the katello-agent package installed. For information on how to configure a host registration and how to register a host and install the katello-agent package see Registering Hosts in the Red Hat Satellite document Managing Hosts . Click Compute Hosts and select the host. Click Edit . Select the Use Foreman/Satellite check box. Select the required Satellite server from the drop-down list. Click OK . The host is now configured to show the available errata, and their importance, in the same dashboard used to manage host configuration. 10.5.4. Explanation of Settings and Controls in the New Host and Edit Host Windows 10.5.5. Host General Settings Explained These settings apply when editing the details of a host or adding new Red Hat Enterprise Linux hosts and Satellite host provider hosts. The General settings table contains the information required on the General tab of the New Host or Edit Host window. Table 10.1. General settings Field Name Description Host Cluster The cluster and data center to which the host belongs. Use Foreman/Satellite Select or clear this check box to view or hide options for adding hosts provided by Satellite host providers. The following options are also available: Discovered Hosts Discovered Hosts - A drop-down list that is populated with the name of Satellite hosts discovered by the engine. Host Groups -A drop-down list of host groups available. Compute Resources - A drop-down list of hypervisors to provide compute resources. Provisioned Hosts Providers Hosts - A drop-down list that is populated with the name of hosts provided by the selected external provider. The entries in this list are filtered in accordance with any search queries that have been input in the Provider search filter . Provider search filter - A text field that allows you to search for hosts provided by the selected external provider. 
This option is provider-specific; see provider documentation for details on forming search queries for specific providers. Leave this field blank to view all available hosts. Name The name of the host. This text field has a 40-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores. Comment A field for adding plain text, human-readable comments regarding the host. Hostname The IP address or resolvable host name of the host. If a resolvable hostname is used, you must ensure for all addresses (IPv4 and IPv6) that the hostname is resolved to match the IP addresses (IPv4 and IPv6) used by the management network of the host. Password The password of the host's root user. This can only be given when you add the host; it cannot be edited afterwards. SSH Public Key Copy the contents in the text box to the /root/.ssh/authorized_hosts file on the host to use the Manager's SSH key instead of a password to authenticate with a host. Automatically configure host firewall When adding a new host, the Manager can open the required ports on the host's firewall. This is enabled by default. This is an Advanced Parameter . SSH Fingerprint You can fetch the host's SSH fingerprint, and compare it with the fingerprint you expect the host to return, ensuring that they match. This is an Advanced Parameter . 10.5.6. Host Power Management Settings Explained The Power Management settings table contains the information required on the Power Management tab of the New Host or Edit Host windows. You can configure power management if the host has a supported power management card. Table 10.2. Power Management Settings Field Name Description Enable Power Management Enables power management on the host. Select this check box to enable the rest of the fields in the Power Management tab. Kdump integration Prevents the host from fencing while performing a kernel crash dump, so that the crash dump is not interrupted. In Red Hat Enterprise Linux 7.1 and later, kdump is available by default. If kdump is available on the host, but its configuration is not valid (the kdump service cannot be started), enabling Kdump integration will cause the host (re)installation to fail. If this is the case, see Section 10.6.4, "fence_kdump Advanced Configuration" . Disable policy control of power management Power management is controlled by the Scheduling Policy of the host's cluster . If power management is enabled and the defined low utilization value is reached, the Manager will power down the host machine, and restart it again when load balancing requires or there are not enough free hosts in the cluster. Select this check box to disable policy control. Agents by Sequential Order Lists the host's fence agents. Fence agents can be sequential, concurrent, or a mix of both. If fence agents are used sequentially, the primary agent is used first to stop or start a host, and if it fails, the secondary agent is used. If fence agents are used concurrently, both fence agents have to respond to the Stop command for the host to be stopped; if one agent responds to the Start command, the host will go up. Fence agents are sequential by default. Use the up and down buttons to change the sequence in which the fence agents are used. To make two fence agents concurrent, select one fence agent from the Concurrent with drop-down list to the other fence agent. 
Additional fence agents can be added to the group of concurrent fence agents by selecting the group from the Concurrent with drop-down list to the additional fence agent. Add Fence Agent Click the + button to add a new fence agent. The Edit fence agent window opens. See the table below for more information on the fields in this window. Power Management Proxy Preference By default, specifies that the Manager will search for a fencing proxy within the same cluster as the host, and if no fencing proxy is found, the Manager will search in the same dc (data center). Use the up and down buttons to change the sequence in which these resources are used. This field is available under Advanced Parameters . The following table contains the information required in the Edit fence agent window. Table 10.3. Edit fence agent Settings Field Name Description Address The address to access your host's power management device. Either a resolvable hostname or an IP address. User Name User account with which to access the power management device. You can set up a user on the device, or use the default user. Password Password for the user accessing the power management device. Type The type of power management device in your host. Choose one of the following: apc - APC MasterSwitch network power switch. Not for use with APC 5.x power switch devices. apc_snmp - Use with APC 5.x power switch devices. bladecenter - IBM Bladecenter Remote Supervisor Adapter. cisco_ucs - Cisco Unified Computing System. drac5 - Dell Remote Access Controller for Dell computers. drac7 - Dell Remote Access Controller for Dell computers. eps - ePowerSwitch 8M+ network power switch. hpblade - HP BladeSystem. ilo , ilo2 , ilo3 , ilo4 - HP Integrated Lights-Out. ipmilan - Intelligent Platform Management Interface and Sun Integrated Lights Out Management devices. rsa - IBM Remote Supervisor Adapter. rsb - Fujitsu-Siemens RSB management interface. wti - WTI Network Power Switch. For more information about power management devices, see Power Management in the Technical Reference . Port The port number used by the power management device to communicate with the host. Slot The number used to identify the blade of the power management device. Service Profile The service profile name used to identify the blade of the power management device. This field appears instead of Slot when the device type is cisco_ucs . Options Power management device specific options. Enter these as 'key=value'. See the documentation of your host's power management device for the options available. For Red Hat Enterprise Linux 7 hosts, if you are using cisco_ucs as the power management device, you also need to append ssl_insecure=1 to the Options field. Secure Select this check box to allow the power management device to connect securely to the host. This can be done via ssh, ssl, or other authentication protocols depending on the power management agent. 10.5.7. SPM Priority Settings Explained The SPM settings table details the information required on the SPM tab of the New Host or Edit Host window. Table 10.4. SPM settings Field Name Description SPM Priority Defines the likelihood that the host will be given the role of Storage Pool Manager (SPM). The options are Low , Normal , and High priority. Low priority means that there is a reduced likelihood of the host being assigned the role of SPM, and High priority means there is an increased likelihood. The default setting is Normal. 10.5.8. 
Host Console Settings Explained The Console settings table details the information required on the Console tab of the New Host or Edit Host window. Table 10.5. Console settings Field Name Description Override display address Select this check box to override the display addresses of the host. This feature is useful in a case where the hosts are defined by internal IP and are behind a NAT firewall. When a user connects to a virtual machine from outside of the internal network, instead of returning the private address of the host on which the virtual machine is running, the machine returns a public IP or FQDN (which is resolved in the external network to the public IP). Display address The display address specified here will be used for all virtual machines running on this host. The address must be in the format of a fully qualified domain name or IP. 10.5.9. Network Provider Settings Explained The Network Provider settings table details the information required on the Network Provider tab of the New Host or Edit Host window. Table 10.6. Network Provider settings Field Name Description External Network Provider If you have added an external network provider and want the host's network to be provisioned by the external network provider, select one from the list. 10.5.10. Kernel Settings Explained The Kernel settings table details the information required on the Kernel tab of the New Host or Edit Host window. Common kernel boot parameter options are listed as check boxes so you can easily select them. For more complex changes, use the free text entry field to Kernel command line to add in any additional parameters required. If you change any kernel command line parameters, you must reinstall the host . Important If the host is attached to the Manager, you must place the host into maintenance mode before making changes. After making the changes, reinstall the host to apply the changes. Table 10.7. Kernel Settings Field Name Description Hostdev Passthrough & SR-IOV Enables the IOMMU flag in the kernel to allow a host device to be used by a virtual machine as if the device is a device attached directly to the virtual machine itself. The host hardware and firmware must also support IOMMU. The virtualization extension and IOMMU extension must be enabled on the hardware. See Configuring a Host for PCI Passthrough . IBM POWER8 has IOMMU enabled by default. Nested Virtualization Enables the vmx or svm flag to allow you to run virtual machines within virtual machines. This option is only intended for evaluation purposes and not supported for production purposes. The vdsm-hook-nestedvt hook must be installed on the host. Unsafe Interrupts If IOMMU is enabled but the passthrough fails because the hardware does not support interrupt remapping, you can consider enabling this option. Note that you should only enable this option if the virtual machines on the host are trusted; having the option enabled potentially exposes the host to MSI attacks from the virtual machines. This option is only intended to be used as a workaround when using uncertified hardware for evaluation purposes. PCI Reallocation If your SR-IOV NIC is unable to allocate virtual functions because of memory issues, consider enabling this option. The host hardware and firmware must also support PCI reallocation. This option is only intended to be used as a workaround when using uncertified hardware for evaluation purposes. Kernel command line This field allows you to append more kernel parameters to the default parameters. 
Note If the kernel boot parameters are grayed out, click the reset button and the options will be available. 10.5.11. Hosted Engine Settings Explained The Hosted Engine settings table details the information required on the Hosted Engine tab of the New Host or Edit Host window. Table 10.8. Hosted Engine Settings Field Name Description Choose hosted engine deployment action Three options are available: None - No actions required. Deploy - Select this option to deploy the host as a self-hosted engine node. Undeploy - For a self-hosted engine node, you can select this option to undeploy the host and remove self-hosted engine related configurations. 10.5.12. Configuring Host Power Management Settings Configure your host power management device settings to perform host life-cycle operations (stop, start, restart) from the Administration Portal. You must configure host power management in order to utilize host high availability and virtual machine high availability. For more information about power management devices, see Power Management in the Technical Reference . Configuring Power Management Settings Click Compute Hosts and select a host. Click Management Maintenance , and click OK to confirm. When the host is in maintenance mode, click Edit . Click the Power Management tab. Select the Enable Power Management check box to enable the fields. Select the Kdump integration check box to prevent the host from fencing while performing a kernel crash dump. Important If you enable or disable Kdump integration on an existing host, you must reinstall the host for kdump to be configured. Optionally, select the Disable policy control of power management check box if you do not want your host's power management to be controlled by the Scheduling Policy of the host's cluster . Click the plus ( + ) button to add a new power management device. The Edit fence agent window opens. Enter the User Name and Password of the power management device into the appropriate fields. Select the power management device Type in the drop-down list. Enter the IP address in the Address field. Enter the SSH Port number used by the power management device to communicate with the host. Enter the Slot number used to identify the blade of the power management device. Enter the Options for the power management device. Use a comma-separated list of 'key=value' entries. If both IPv4 and IPv6 IP addresses can be used (default), leave the Options field blank. If only IPv4 IP addresses can be used, enter inet4_only=1 . If only IPv6 IP addresses can be used, enter inet6_only=1 . Select the Secure check box to enable the power management device to connect securely to the host. Click Test to ensure the settings are correct. Test Succeeded, Host Status is: on will display upon successful verification. Click OK to close the Edit fence agent window. In the Power Management tab, optionally expand the Advanced Parameters and use the up and down buttons to specify the order in which the Manager will search the host's cluster and dc (datacenter) for a fencing proxy. Click OK . Note For IPv6, Red Hat Virtualization supports only static addressing. Dual-stack IPv4 and IPv6 addressing is not supported. The Management Power Management drop-down menu is now enabled in the Administration Portal. 10.5.13. Configuring Host Storage Pool Manager Settings The Storage Pool Manager (SPM) is a management role given to one of the hosts in a data center to maintain access control over the storage domains. 
The SPM must always be available, and the SPM role will be assigned to another host if the SPM host becomes unavailable. As the SPM role uses some of the host's available resources, it is important to prioritize hosts that can afford the resources. The Storage Pool Manager (SPM) priority setting of a host alters the likelihood of the host being assigned the SPM role: a host with high SPM priority will be assigned the SPM role before a host with low SPM priority. Configuring SPM settings Click Compute Hosts . Click Edit . Click the SPM tab. Use the radio buttons to select the appropriate SPM priority for the host. Click OK . 10.5.14. Configuring a Host for PCI Passthrough Note This is one in a series of topics that show how to set up and configure SR-IOV on Red Hat Virtualization. For more information, see Setting Up and Configuring SR-IOV . Enabling PCI passthrough allows a virtual machine to use a host device as if the device were directly attached to the virtual machine. To enable the PCI passthrough function, you must enable virtualization extensions and the IOMMU function. The following procedure requires you to reboot the host. If the host is attached to the Manager already, ensure you place the host into maintenance mode first. Prerequisites Ensure that the host hardware meets the requirements for PCI device passthrough and assignment. See PCI Device Requirements for more information. Configuring a Host for PCI Passthrough Enable the virtualization extension and IOMMU extension in the BIOS. See Enabling Intel VT-x and AMD-V virtualization hardware extensions in BIOS in the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide for more information. Enable the IOMMU flag in the kernel by selecting the Hostdev Passthrough & SR-IOV check box when adding the host to the Manager or by editing the grub configuration file manually. To enable the IOMMU flag from the Administration Portal, see Adding Standard Hosts to the Red Hat Virtualization Manager and Kernel Settings Explained . To edit the grub configuration file manually, see Enabling IOMMU Manually . For GPU passthrough, you need to run additional configuration steps on both the host and the guest system. See GPU device passthrough: Assigning a host GPU to a single virtual machine in Setting up an NVIDIA GPU for a virtual machine in Red Hat Virtualization for more information. Enabling IOMMU Manually Enable IOMMU by editing the grub configuration file. Note If you are using IBM POWER8 hardware, skip this step as IOMMU is enabled by default. For Intel, boot the machine, and append intel_iommu=on to the end of the GRUB_CMDLINE_LINUX line in the grub configuration file. For AMD, boot the machine, and append amd_iommu=on to the end of the GRUB_CMDLINE_LINUX line in the grub configuration file. Note If intel_iommu=on or amd_iommu=on works, you can try adding iommu=pt or amd_iommu=pt . The pt option only enables IOMMU for devices used in passthrough and provides better host performance. However, the option might not be supported on all hardware. Revert to the previous option if the pt option doesn't work for your host. If the passthrough fails because the hardware does not support interrupt remapping, you can consider enabling the allow_unsafe_interrupts option if the virtual machines are trusted. The allow_unsafe_interrupts option is not enabled by default because enabling it potentially exposes the host to MSI attacks from virtual machines.
To enable the option: Refresh the grub.cfg file and reboot the host for these changes to take effect: To enable SR-IOV and assign dedicated virtual NICs to virtual machines, see https://access.redhat.com/articles/2335291 . 10.5.15. Moving a Host to Maintenance Mode Many common maintenance tasks, including network configuration and deployment of software updates, require that hosts be placed into maintenance mode. Hosts should be placed into maintenance mode before any event that might cause VDSM to stop working properly, such as a reboot, or issues with networking or storage. When a host is placed into maintenance mode the Red Hat Virtualization Manager attempts to migrate all running virtual machines to alternative hosts. The standard prerequisites for live migration apply, in particular there must be at least one active host in the cluster with capacity to run the migrated virtual machines. Note Virtual machines that are pinned to the host and cannot be migrated are shut down. You can check which virtual machines are pinned to the host by clicking Pinned to Host in the Virtual Machines tab of the host's details view. Placing a Host into Maintenance Mode Click Compute Hosts and select the desired host. Click Management Maintenance to open the Maintenance Host(s) confirmation window. Optionally, enter a Reason for moving the host into maintenance mode, which will appear in the logs and when the host is activated again. Note The host maintenance Reason field will only appear if it has been enabled in the cluster settings. See Section 8.2.2, "General Cluster Settings Explained" for more information. Optionally, select the required options for hosts that support Gluster. Select the Ignore Gluster Quorum and Self-Heal Validations option to avoid the default checks. By default, the Manager checks that the Gluster quorum is not lost when the host is moved to maintenance mode. The Manager also checks that there is no self-heal activity that will be affected by moving the host to maintenance mode. If the Gluster quorum will be lost or if there is self-heal activity that will be affected, the Manager prevents the host from being placed into maintenance mode. Only use this option if there is no other way to place the host in maintenance mode. Select the Stop Gluster Service option to stop all Gluster services while moving the host to maintenance mode. Note These fields will only appear in the host maintenance window when the selected host supports Gluster. See Replacing the Primary Gluster Storage Node in Maintaining Red Hat Hyperconverged Infrastructure for more information. Click OK to initiate maintenance mode. All running virtual machines are migrated to alternative hosts. If the host is the Storage Pool Manager (SPM), the SPM role is migrated to another host. The Status field of the host changes to Preparing for Maintenance , and finally Maintenance when the operation completes successfully. VDSM does not stop while the host is in maintenance mode. Note If migration fails on any virtual machine, click Management Activate on the host to stop the operation placing it into maintenance mode, then click Cancel Migration on the virtual machine to stop the migration. 10.5.16. Activating a Host from Maintenance Mode A host that has been placed into maintenance mode, or recently added to the environment, must be activated before it can be used. Activation may fail if the host is not ready; ensure that all tasks are complete before attempting to activate the host. 
Activating a Host from Maintenance Mode Click Compute Hosts and select the host. Click Management Activate . The host status changes to Unassigned , and finally Up when the operation is complete. Virtual machines can now run on the host. Virtual machines that were migrated off the host when it was placed into maintenance mode are not automatically migrated back to the host when it is activated, but can be migrated manually. If the host was the Storage Pool Manager (SPM) before being placed into maintenance mode, the SPM role does not return automatically when the host is activated. 10.5.17. Configuring Host Firewall Rules You can configure the host firewall rules so that they are persistent, using Ansible. The cluster must be configured to use firewalld , not iptables . Note iptables is deprecated. Configuring Firewall Rules for Hosts On the Manager machine, edit ovirt-host-deploy-post-tasks.yml.example to add a custom firewall port: Save the file to another location as ovirt-host-deploy-post-tasks.yml . New or reinstalled hosts are configured with the updated firewall rules. Existing hosts must be reinstalled by clicking Installation Reinstall and selecting Automatically configure host firewall . 10.5.18. Removing a Host Remove a host from your virtualized environment. Removing a host Click Compute Hosts and select the host. Click Management Maintenance . when the host is in maintenance mode, click Remove to open the Remove Host(s) confirmation window. Select the Force Remove check box if the host is part of a Red Hat Gluster Storage cluster and has volume bricks on it, or if the host is non-responsive. Click OK . 10.5.19. Updating Hosts Between Minor Releases You can update all hosts in a cluster , or update individual hosts . 10.5.19.1. Updating All Hosts in a Cluster You can update all hosts in a cluster instead of updating hosts individually. This is particularly useful during upgrades to new versions of Red Hat Virtualization. See https://github.com/oVirt/ovirt-ansible-cluster-upgrade/blob/master/README.md for more information about the Ansible role used to automate the updates. Red Hat recommends updating one cluster at a time. Limitations On RHVH, the update only preserves modified content in the /etc and /var directories. Modified data in other paths is overwritten during an update. If migration is enabled at the cluster level, virtual machines are automatically migrated to another host in the cluster. In a self-hosted engine environment, the Manager virtual machine can only migrate between self-hosted engine nodes in the same cluster. It cannot migrate to standard hosts. The cluster must have sufficient memory reserved for its hosts to perform maintenance. Otherwise, virtual machine migrations will hang and fail. You can reduce the memory usage of host updates by shutting down some or all virtual machines before updating hosts. You cannot migrate a pinned virtual machine (such as a virtual machine using a vGPU) to another host. Pinned virtual machines are shut down during the update, unless you choose to skip that host instead. Procedure In the Administration Portal, click Compute Clusters and select the cluster. Click Upgrade . Select the hosts to update, then click . Configure the options: Stop Pinned VMs shuts down any virtual machines that are pinned to hosts in the cluster, and is selected by default. 
You can clear this check box to skip updating those hosts so that the pinned virtual machines stay running, such as when a pinned virtual machine is running important services or processes and you do not want it to shut down at an unknown time during the update. Upgrade Timeout (Minutes) sets the time to wait for an individual host to be updated before the cluster upgrade fails with a timeout. The default is 60 . You can increase it for large clusters where 60 minutes might not be enough, or reduce it for small clusters where the hosts update quickly. Check Upgrade checks each host for available updates before running the upgrade process. It is not selected by default, but you can select it if you need to ensure that recent updates are included, such as when you have configured the Manager to check for host updates less frequently than the default. Reboot After Upgrade reboots each host after it is updated, and is selected by default. You can clear this check box to speed up the process if you are sure that there are no pending updates that require a host reboot. Use Maintenance Policy sets the cluster's scheduling policy to cluster_maintenance during the update. It is selected by default, so activity is limited and virtual machines cannot start unless they are highly available. You can clear this check box if you have a custom scheduling policy that you want to keep using during the update, but this could have unknown consequences. Ensure your custom policy is compatible with cluster upgrade activity before disabling this option. Click . Review the summary of the hosts and virtual machines that will be affected. Click Upgrade . You can track the progress of host updates in the Compute Hosts view, and in the Events section of the Notification Drawer ( ). You can track the progress of individual virtual machine migrations in the Status column of the Compute Virtual Machines view. In large environments, you may need to filter the results to show a particular group of virtual machines. 10.5.19.2. Updating Individual Hosts Use the host upgrade manager to update individual hosts directly from the Administration Portal. Note The upgrade manager only checks hosts with a status of Up or Non-operational , but not Maintenance . Limitations On RHVH, the update only preserves modified content in the /etc and /var directories. Modified data in other paths is overwritten during an update. If migration is enabled at the cluster level, virtual machines are automatically migrated to another host in the cluster. Update a host when its usage is relatively low. In a self-hosted engine environment, the Manager virtual machine can only migrate between self-hosted engine nodes in the same cluster. It cannot migrate to standard hosts. The cluster must have sufficient memory reserved for its hosts to perform maintenance. Otherwise, virtual machine migrations will hang and fail. You can reduce the memory usage of host updates by shutting down some or all virtual machines before updating hosts. Do not update all hosts at the same time, as one host must remain available to perform Storage Pool Manager (SPM) tasks. You cannot migrate a pinned virtual machine (such as a virtual machine using a vGPU) to another host. Pinned virtual machines must be shut down before updating the host. Procedure Ensure that the correct repositories are enabled. To view a list of currently enabled repositories, run yum repolist . 
For Red Hat Virtualization Hosts: For Red Hat Enterprise Linux hosts: In the Administration Portal, click Compute Hosts and select the host to be updated. Click Installation Check for Upgrade and click OK . Open the Notification Drawer ( ) and expand the Events section to see the result. If an update is available, click Installation Upgrade . Click OK to update the host. Running virtual machines are migrated according to their migration policy. If migration is disabled for any virtual machines, you are prompted to shut them down. The details of the host are updated in Compute Hosts and the status transitions through these stages: Maintenance > Installing > Reboot > Up Note If the update fails, the host's status changes to Install Failed . From Install Failed you can click Installation Upgrade again. Repeat this procedure for each host in the Red Hat Virtualization environment. Red Hat recommends updating the hosts from the Administration Portal. However, you can update the hosts using yum update instead: 10.5.19.3. Manually Updating Hosts You can use the yum command to update your hosts. Update your systems regularly, to ensure timely application of security and bug fixes. Limitations On RHVH, the update only preserves modified content in the /etc and /var directories. Modified data in other paths is overwritten during an update. If migration is enabled at the cluster level, virtual machines are automatically migrated to another host in the cluster. Update a host when its usage is relatively low. In a self-hosted engine environment, the Manager virtual machine can only migrate between self-hosted engine nodes in the same cluster. It cannot migrate to standard hosts. The cluster must have sufficient memory reserved for its hosts to perform maintenance. Otherwise, virtual machine migrations will hang and fail. You can reduce the memory usage of host updates by shutting down some or all virtual machines before updating hosts. Do not update all hosts at the same time, as one host must remain available to perform Storage Pool Manager (SPM) tasks. You cannot migrate a pinned virtual machine (such as a virtual machine using a vGPU) to another host. Pinned virtual machines must be shut down before updating the host. Procedure Ensure the correct repositories are enabled. You can check which repositories are currently enabled by running yum repolist . For Red Hat Virtualization Hosts: For Red Hat Enterprise Linux hosts: In the Administration Portal, click Compute Hosts and select the host to be updated. Click Management Maintenance . Update the host: Reboot the host to ensure all updates are correctly applied. Note Check the imgbased logs to see if any additional package updates have failed for a Red Hat Virtualization Host. If some packages were not successfully reinstalled after the update, check that the packages are listed in /var/imgbased/persisted-rpms . Add any missing packages then run rpm -Uvh /var/imgbased/persisted-rpms/* . Repeat this process for each host in the Red Hat Virtualization environment. 10.5.20. Reinstalling Hosts Reinstall Red Hat Virtualization Hosts (RHVH) and Red Hat Enterprise Linux hosts from the Administration Portal. The procedure includes stopping and restarting the host. Prerequisites If migration is enabled at cluster level, virtual machines are automatically migrated to another host in the cluster; as a result, it is recommended that host reinstalls are performed at a time when the host's usage is relatively low. 
Ensure that the cluster has sufficient memory reserve in order for its hosts to perform maintenance. If a cluster lacks sufficient memory, the virtual machine migration operation will hang and then fail. You can reduce the memory usage of this operation by shutting down some or all virtual machines before moving the host to maintenance. Ensure that the cluster contains more than one host before performing a reinstall. Do not attempt to reinstall all the hosts at the same time, as one host must remain available to perform Storage Pool Manager (SPM) tasks. Procedure Click Compute Hosts and select the host. Click Management Maintenance . Click Installation Reinstall to open the Install Host window. Click OK to reinstall the host. Once successfully reinstalled, the host displays a status of Up . Any virtual machines that were migrated off the host can now be migrated back to it. Important After a Red Hat Virtualization Host is successfully registered to the Red Hat Virtualization Manager and then reinstalled, it may erroneously appear in the Administration Portal with the status of Install Failed . Click Management Activate , and the host will change to an Up status and be ready for use. 10.5.21. Viewing Host Errata Errata for each host can be viewed after the host has been configured to receive errata information from the Red Hat Satellite server. For more information on configuring a host to receive errata information see Section 10.5.3, "Configuring Satellite Errata Management for a Host" Viewing Host Errata Click Compute Hosts . Click the host's name to open the details view. Click the Errata tab. 10.5.22. Viewing the Health Status of a Host Hosts have an external health status in addition to their regular Status . The external health status is reported by plug-ins or external systems, or set by an administrator, and appears to the left of the host's Name as one of the following icons: OK : No icon Info : Warning : Error : Failure : To view further details about the host's health status, click the host's name to open the details view, and click the Events tab. The host's health status can also be viewed using the REST API. A GET request on a host will include the external_status element, which contains the health status. You can set a host's health status in the REST API via the events collection. For more information, see Adding Events in the REST API Guide . 10.5.23. Viewing Host Devices You can view the host devices for each host in the Host Devices tab in the details view. If the host has been configured for direct device assignment, these devices can be directly attached to virtual machines for improved performance. For more information on the hardware requirements for direct device assignment, see Additional Hardware Considerations for Using Device Assignment in Hardware Considerations for Implementing SR-IOV . For more information on configuring the host for direct device assignment, see Section 10.5.14, "Configuring a Host for PCI Passthrough" . For more information on attaching host devices to virtual machines, see Host Devices in the Virtual Machine Management Guide . Viewing Host Devices Click Compute Hosts . Click the host's name to open the details view. Click Host Devices tab. This tab lists the details of the host devices, including whether the device is attached to a virtual machine, and currently in use by that virtual machine. 10.5.24. 
Accessing Cockpit from the Administration Portal Cockpit is available by default on Red Hat Virtualization Hosts (RHVH) and Red Hat Enterprise Linux hosts. You can access the Cockpit web interface by typing the address into a browser, or through the Administration Portal. Accessing Cockpit from the Administration Portal In the Administration Portal, click Compute Hosts and select a host. Click Host Console . The Cockpit login page opens in a new browser window. 10.5.25. Setting a Legacy SPICE Cipher SPICE consoles use FIPS-compliant encryption by default, with a cipher string. The default SPICE cipher string is: kECDHE+FIPS:kDHE+FIPS:kRSA+FIPS:!eNULL:!aNULL This string is generally sufficient. However, if you have a virtual machine with an older operating system or SPICE client, where either one or the other does not support FIPS-compliant encryption, you must use a weaker cipher string. Otherwise, a connection security error may occur if you install a new cluster or a new host in an existing cluster and try to connect to that virtual machine. You can change the cipher string by using an Ansible playbook. Changing the cipher string On the Manager machine, create a file in the directory /usr/share/ovirt-engine/playbooks . For example: Enter the following in the file and save it: name: oVirt - setup weaker SPICE encryption for old clients hosts: hostname vars: host_deploy_spice_cipher_string: 'DEFAULT:-RC4:-3DES:-DES' roles: - ovirt-host-deploy-spice-encryption Run the file you just created: Alternatively, you can reconfigure the host with the Ansible playbook ovirt-host-deploy using the --extra-vars option with the variable host_deploy_spice_cipher_string , as follows:
[ "vi /etc/default/grub GRUB_CMDLINE_LINUX=\"nofb splash=quiet console=tty0 ... intel_iommu=on", "vi /etc/default/grub GRUB_CMDLINE_LINUX=\"nofb splash=quiet console=tty0 ... amd_iommu=on", "vi /etc/modprobe.d options vfio_iommu_type1 allow_unsafe_interrupts=1", "grub2-mkconfig -o /boot/grub2/grub.cfg", "reboot", "vi /etc/ovirt-engine/ansible/ovirt-host-deploy-post-tasks.yml.example --- # Any additional tasks required to be executing during host deploy process can be added below # - name: Enable additional port on firewalld firewalld: port: \" 12345/tcp \" permanent: yes immediate: yes state: enabled", "subscription-manager repos --enable=rhel-7-server-rhvh-4-rpms", "subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-rhv-4-mgmt-agent-rpms --enable=rhel-7-server-ansible-2.9-rpms", "subscription-manager repos --enable=rhel-7-server-rhvh-4-rpms", "subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-rhv-4-mgmt-agent-rpms --enable=rhel-7-server-ansible-2.9-rpms", "yum update", "vim /usr/share/ovirt-engine/playbooks/change-spice-cipher.yml", "name: oVirt - setup weaker SPICE encryption for old clients hosts: hostname vars: host_deploy_spice_cipher_string: 'DEFAULT:-RC4:-3DES:-DES' roles: - ovirt-host-deploy-spice-encryption", "ansible-playbook -l hostname /usr/share/ovirt-engine/playbooks/change-spice-cipher.yml", "ansible-playbook -l hostname --extra-vars host_deploy_spice_cipher_string=\"DEFAULT:-RC4:-3DES:-DES\" /usr/share/ovirt-engine/playbooks/ovirt-host-deploy.yml" ]
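The procedure in Section 10.5.1 adds a host through the Administration Portal. For automation, the same operation can be performed against the Manager REST API; the sketch below is illustrative only, and the Manager URL, credentials, host name, address, root password, and cluster name are all placeholder assumptions rather than values from this guide.
# Add a host by POSTing to the hosts collection (all values below are placeholders):
curl -s -k -u "admin@internal:manager-password" \
  -H "Content-Type: application/xml" -H "Accept: application/xml" \
  -d '<host>
        <name>host01</name>
        <address>host01.example.com</address>
        <root_password>host-root-password</root_password>
        <cluster><name>Default</name></cluster>
      </host>' \
  "https://manager.example.com/ovirt-engine/api/hosts"
As in the portal procedure, the host then appears with an Installing status and transitions to Up once deployment completes.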
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/sect-Host_Tasks
2.7. Design-Time and Runtime Metadata
2.7. Design-Time and Runtime Metadata Teiid Designer software distinguishes between design-time metadata and runtime metadata. This distinction becomes important if you use the JBoss Data Virtualization Server. Design-time data is laden with details and representations that help the user understand and efficiently organize metadata. Much of that detail is unnecessary to the underlying system that runs the Virtual Database that you will create. Any information that is not absolutely necessary to running the Virtual Database is stripped out of the runtime metadata to ensure maximum system performance.
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/design-time_and_runtime_metadata
Chapter 2. Compiling Protobuf Schema
Chapter 2. Compiling Protobuf Schema Data Grid uses the ProtoStream API to store data as Protobuf-encoded entries. Protobuf is a language-neutral format that allows clients to create and retrieve entries in remote caches using both Hot Rod and REST endpoints. 2.1. Compiling Protobuf schema on Red Hat Enterprise Linux (RHEL) Compile Protobuf schema, .proto files, into C++ header and source files to describe your data to Data Grid. Prerequisites Install the Protobuf library and protobuf-devel package. Procedure Set the LD_LIBRARY_PATH environment variable, if it is not already set. Compile Protobuf schema for the Hot Rod C++ client as required. HR_PROTO_EXPORT is a macro that the Hot Rod C++ client expands when it compiles the Protobuf schema. Register your Protobuf schema with Data Grid if you plan to use queries. Additional resources Registering Protobuf Schemas 2.2. Compiling Protobuf schema on Microsoft Windows Compile Protobuf schema, .proto files, into C++ header and source files to describe your data to Data Grid. Procedure Open a command prompt to the installation directory for the Hot Rod C++ client. Compile Protobuf schema for the Hot Rod C++ client as required. HR_PROTO_EXPORT is a macro that the Hot Rod C++ client expands when it compiles the Protobuf schema. Register your Protobuf schema with Data Grid if you plan to use queries. Additional resources Registering Protobuf Schemas
[ "yum install protobuf yum install protobuf-devel", "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/lib64", "/bin/protoc --cpp_out dllexport_decl=HR_PROTO_EXPORT:/path/to/output/ $FILE", "bin\\protoc --cpp_out dllexport_decl=HR_PROTO_EXPORT:path\\to\\output\\ $FILE" ]
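As a concrete illustration of the compile step on RHEL, the sketch below writes a trivial schema and runs the same protoc invocation shown above. The file name, package, and message definition are made up for this example, and /path/to/output/ is a placeholder for your output directory.
# Write a minimal schema (names are illustrative only):
cat > book.proto <<'EOF'
syntax = "proto2";
package book_sample;

message Book {
    optional string title = 1;
    optional string description = 2;
    optional int32 publication_year = 3;
}
EOF
# Compile it into C++ header and source files for the Hot Rod C++ client:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/lib64
/bin/protoc --cpp_out dllexport_decl=HR_PROTO_EXPORT:/path/to/output/ book.proto
The command produces book.pb.h and book.pb.cc in the output directory, which you then build into your client application.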
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/hot_rod_cpp_client_guide/compiling_schema
15.14. Removing the Changelog
15.14. Removing the Changelog The changelog is a record of all modifications on a given replica that the supplier uses to replay these modifications to replicas on consumer servers (or suppliers in the case of multi-supplier replication). If a supplier server goes offline, it is important to be able to delete the changelog because it no longer holds a true record of all modifications and, as a result, should not be used as a basis for replication. A changelog can be effectively deleted by deleting the log file. 15.14.1. Removing the Changelog using the Command Line To remove the changelog from the supplier server: Verify whether replication is disabled for all suffixes: Remove the changelog: 15.14.2. Removing the Changelog using the Web Console To remove the changelog from the supplier server: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Replication menu, and select the Replication Changelog entry. Click Delete Changelog .
[ "dsconf -D \"cn=Directory Manager\" ldap://supplier.example.com replication list There are no replicated suffixes", "dsconf -D \"cn=Directory Manager\" ldap://supplier.example.com replication delete-changelog" ]
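The command-line procedure above assumes replication is already disabled for every suffix. If the replication list command still reports a replicated suffix, disable replication for it first; the suffix in the sketch below, dc=example,dc=com, is an example value, not one taken from this guide.
# Disable replication for a still-replicated suffix (example suffix shown):
dsconf -D "cn=Directory Manager" ldap://supplier.example.com replication disable --suffix "dc=example,dc=com"
# Re-check that no replicated suffixes remain, then remove the changelog:
dsconf -D "cn=Directory Manager" ldap://supplier.example.com replication list
dsconf -D "cn=Directory Manager" ldap://supplier.example.com replication delete-changelog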
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/removing_the_changelog
Chapter 3. Metro-DR solution for OpenShift Data Foundation
Chapter 3. Metro-DR solution for OpenShift Data Foundation This section of the guide provides details of the Metro Disaster Recovery (Metro DR) steps and commands necessary to fail over an application from one OpenShift Container Platform cluster to another and then fail back the same application to the original primary cluster. In this case the OpenShift Container Platform clusters will be created or imported using Red Hat Advanced Cluster Management (RHACM) and have distance limitations between the OpenShift Container Platform clusters of less than 10 ms RTT latency. The persistent storage for applications is provided by an external Red Hat Ceph Storage (RHCS) cluster stretched between the two locations with the OpenShift Container Platform instances connected to this storage cluster. An arbiter node with a storage monitor service is required at a third location (different location than where OpenShift Container Platform instances are deployed) to establish quorum for the RHCS cluster in the case of a site outage. This third location can be in the range of ~100 ms RTT from the storage cluster connected to the OpenShift Container Platform instances. This is a general overview of the Metro DR steps required to configure and execute OpenShift Disaster Recovery (ODR) capabilities using OpenShift Data Foundation and RHACM across two distinct OpenShift Container Platform clusters separated by distance. In addition to these two clusters called managed clusters, a third OpenShift Container Platform cluster is required that will be the Red Hat Advanced Cluster Management (RHACM) hub cluster. 3.1. Components of Metro-DR solution Metro-DR is composed of Red Hat Advanced Cluster Management for Kubernetes, Red Hat Ceph Storage and OpenShift Data Foundation components to provide application and data mobility across OpenShift Container Platform clusters. Red Hat Advanced Cluster Management for Kubernetes Red Hat Advanced Cluster Management (RHACM) provides the ability to manage multiple clusters and application lifecycles. Hence, it serves as a control plane in a multi-cluster environment. RHACM is split into two parts: RHACM Hub: components that run on the multi-cluster control plane Managed clusters: components that run on the clusters that are managed For more information about this product, see RHACM documentation and the RHACM "Manage Applications" documentation . Red Hat Ceph Storage Red Hat Ceph Storage is a massively scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services. It significantly lowers the cost of storing enterprise data and helps organizations manage exponential data growth. The software is a robust and modern petabyte-scale storage platform for public or private cloud deployments. For more product information, see Red Hat Ceph Storage . OpenShift Data Foundation OpenShift Data Foundation provides the ability to provision and manage storage for stateful applications in an OpenShift Container Platform cluster. It is backed by Ceph as the storage provider, whose lifecycle is managed by Rook in the OpenShift Data Foundation component stack, and Ceph-CSI provides the provisioning and management of Persistent Volumes for stateful applications.
OpenShift DR OpenShift DR is a disaster recovery orchestrator for stateful applications across a set of peer OpenShift clusters which are deployed and managed using RHACM and provides cloud-native interfaces to orchestrate the life-cycle of an application's state on Persistent Volumes. These include: Protecting an application and its state relationship across OpenShift clusters Failing over an application and its state to a peer cluster Relocate an application and its state to the previously deployed cluster OpenShift DR is split into three components: ODF Multicluster Orchestrator : Installed on the multi-cluster control plane (RHACM Hub), it orchestrates configuration and peering of OpenShift Data Foundation clusters for Metro and Regional DR relationships. OpenShift DR Hub Operator : Automatically installed as part of ODF Multicluster Orchestrator installation on the hub cluster to orchestrate failover or relocation of DR enabled applications. OpenShift DR Cluster Operator : Automatically installed on each managed cluster that is part of a Metro and Regional DR relationship to manage the lifecycle of all PVCs of an application. 3.2. Metro-DR deployment workflow This section provides an overview of the steps required to configure and deploy Metro-DR capabilities using the latest versions of Red Hat OpenShift Data Foundation, Red Hat Ceph Storage (RHCS) and Red Hat Advanced Cluster Management for Kubernetes (RHACM) version 2.8 or later, across two distinct OpenShift Container Platform clusters. In addition to two managed clusters, a third OpenShift Container Platform cluster will be required to deploy the Advanced Cluster Management. To configure your infrastructure, perform the below steps in the order given: Ensure requirements across the Hub, Primary and Secondary Openshift Container Platform clusters that are part of the DR solution are met. See Requirements for enabling Metro-DR . Ensure you meet the requirements for deploying Red Hat Ceph Storage stretch cluster with arbiter. See Requirements for deploying Red Hat Ceph Storage . Deploy and configure Red Hat Ceph Storage stretch mode. For instructions on enabling Ceph cluster on two different data centers using stretched mode functionality, see Deploying Red Hat Ceph Storage . Install OpenShift Data Foundation operator and create a storage system on Primary and Secondary managed clusters. See Installing OpenShift Data Foundation on managed clusters . Install the ODF Multicluster Orchestrator on the Hub cluster. See Installing ODF Multicluster Orchestrator on Hub cluster . Configure SSL access between the Hub, Primary and Secondary clusters. See Configuring SSL access across clusters . Create a DRPolicy resource for use with applications requiring DR protection across the Primary and Secondary clusters. See Creating Disaster Recovery Policy on Hub cluster . Note The Metro-DR solution can only have one DRpolicy. Testing your disaster recovery solution with: Subscription-based application: Create sample applications. See Creating sample application . Test failover and relocate operations using the sample application between managed clusters. See Subscription-based application failover and relocating subscription-based application . ApplicationSet-based application: Create sample applications. See Creating ApplicationSet-based applications . Test failover and relocate operations using the sample application between managed clusters. See ApplicationSet-based application failover and relocating ApplicationSet-based application . 3.3. 
Requirements for enabling Metro-DR Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites in order to successfully implement a Disaster Recovery solution: You must have the following OpenShift clusters that have network reachability between them: Hub cluster where Red Hat Advanced Cluster Management for Kubernetes (RHACM operator) is installed. Primary managed cluster where OpenShift Data Foundation is installed. Secondary managed cluster where OpenShift Data Foundation is installed. Ensure that RHACM operator and MultiClusterHub is installed on the Hub cluster. See RHACM installation guide for instructions. After the operator is successfully installed, a popover with a message that the Web console update is available appears on the user interface. Click Refresh web console from this popover for the console changes to reflect. Important It is the user's responsibility to ensure that application traffic routing and redirection are configured appropriately. Configuration and updates to the application traffic routes are currently not supported. On the Hub cluster, navigate to All Clusters Infrastructure Clusters. Ensure that you either import or create the Primary managed cluster and the Secondary managed cluster using the RHACM console. Choose the appropriate options for your environment. After the managed clusters are successfully created or imported, you can see the list of clusters that were imported or created on the console. For instructions, see Creating a cluster and Importing a target managed cluster to the hub cluster . Warning There are distance limitations between the locations where the OpenShift Container Platform managed clusters reside as well as the RHCS nodes. The network latency between the sites must be below 10 milliseconds round-trip time (RTT). 3.4. Requirements for deploying Red Hat Ceph Storage stretch cluster with arbiter Red Hat Ceph Storage is an open-source enterprise platform that provides unified software-defined storage on standard, economical servers and disks. With block, object, and file storage combined into one platform, Red Hat Ceph Storage efficiently and automatically manages all your data, so you can focus on the applications and workloads that use it. This section provides a basic overview of the Red Hat Ceph Storage deployment. For more complex deployment, refer to the official documentation guide for Red Hat Ceph Storage 6.1 . Note Only Flash media is supported since it runs with min_size=1 when degraded. Use stretch mode only with all-flash OSDs. Using all-flash OSDs minimizes the time needed to recover once connectivity is restored, thus minimizing the potential for data loss. Important Erasure coded pools cannot be used with stretch mode. 3.4.1. Hardware requirements For information on minimum hardware requirements for deploying Red Hat Ceph Storage, see Minimum hardware recommendations for containerized Ceph . Table 3.1. Physical server locations and Ceph component layout for Red Hat Ceph Storage cluster deployment: Node name Datacenter Ceph components ceph1 DC1 OSD+MON+MGR ceph2 DC1 OSD+MON ceph3 DC1 OSD+MDS+RGW ceph4 DC2 OSD+MON+MGR ceph5 DC2 OSD+MON ceph6 DC2 OSD+MDS+RGW ceph7 DC3 MON 3.4.2. Software requirements Use the latest software version of Red Hat Ceph Storage 6.1 . For more information on the supported Operating System versions for Red Hat Ceph Storage, see knowledgebase article on Red Hat Ceph Storage: Supported configurations . 3.4.3. 
Network configuration requirements The recommended Red Hat Ceph Storage configuration is as follows: You must have two separate networks, one public network and one private network. You must have three different datacenters that support VLANs and subnets for Ceph's private and public networks in all datacenters. Note You can use different subnets for each of the datacenters. The latencies between the two datacenters running the Red Hat Ceph Storage Object Storage Devices (OSDs) cannot exceed 10 ms RTT. For the arbiter datacenter, this was tested with values as high as 100 ms RTT to the other two OSD datacenters. Here is an example of a basic network configuration that we have used in this guide: DC1: Ceph public/private network: 10.0.40.0/24 DC2: Ceph public/private network: 10.0.40.0/24 DC3: Ceph public/private network: 10.0.40.0/24 For more information on the required network environment, see Ceph network configuration . 3.5. Deploying Red Hat Ceph Storage 3.5.1. Node pre-deployment steps Before installing the Red Hat Ceph Storage cluster, perform the following steps to fulfill all of the requirements. Register all the nodes to the Red Hat Network or Red Hat Satellite and subscribe to a valid pool: subscription-manager register subscription-manager subscribe --pool=8a8XXXXXX9e0 Enable access for all the nodes in the Ceph cluster to the following repositories: rhel9-for-x86_64-baseos-rpms rhel9-for-x86_64-appstream-rpms subscription-manager repos --disable="*" --enable="rhel9-for-x86_64-baseos-rpms" --enable="rhel9-for-x86_64-appstream-rpms" Update the operating system RPMs to the latest version and reboot if needed: dnf update -y reboot Select a node from the cluster to be your bootstrap node. ceph1 is our bootstrap node in this example going forward. Only on the bootstrap node ceph1 , enable the ansible-2.9-for-rhel-9-x86_64-rpms and rhceph-6-tools-for-rhel-9-x86_64-rpms repositories: subscription-manager repos --enable="ansible-2.9-for-rhel-9-x86_64-rpms" --enable="rhceph-6-tools-for-rhel-9-x86_64-rpms" Configure the hostname using the bare/short hostname on all the hosts. hostnamectl set-hostname <short_name> Verify the hostname configuration for deploying Red Hat Ceph Storage with cephadm. $ hostname Example output: Modify the /etc/hosts file and add the FQDN entry to the 127.0.0.1 IP by setting the DOMAIN variable to your DNS domain name. Check the long hostname with the FQDN using the hostname -f option. $ hostname -f Example output: Note To know more about why these changes are required, see Fully Qualified Domain Names vs Bare Host Names . Run the following steps on the bootstrap node. In our example, the bootstrap node is ceph1 . Install the cephadm-ansible RPM package: $ sudo dnf install -y cephadm-ansible Important To run the Ansible playbooks, you must have passwordless SSH access to all the nodes that are configured in the Red Hat Ceph Storage cluster. Ensure that the configured user (for example, deployment-user ) has root privileges to invoke the sudo command without needing a password.
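If passwordless SSH is not already in place, the following is a minimal sketch of how it could be set up from the bootstrap node for the deployment-user account; the key path, host names, and sudoers drop-in file name are illustrative assumptions rather than steps taken from this guide (the ~/.ssh/ceph.pem key name is chosen to match the custom key used in the next step):
# Generate a key pair for deployment-user on the bootstrap node (ceph1)
$ ssh-keygen -t rsa -b 4096 -f ~/.ssh/ceph.pem -N ""
# Copy the public key to every node in the cluster
$ for host in ceph1 ceph2 ceph3 ceph4 ceph5 ceph6 ceph7; do
    ssh-copy-id -i ~/.ssh/ceph.pem.pub deployment-user@${host}
  done
# On each node, allow deployment-user to run sudo without a password
$ echo "deployment-user ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/deployment-user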
To use a custom key, configure the SSH config file of the selected user (for example, deployment-user ) to specify the identity/key that will be used for connecting to the nodes via SSH: cat <<EOF > ~/.ssh/config Host ceph* User deployment-user IdentityFile ~/.ssh/ceph.pem EOF Build the Ansible inventory: cat <<EOF > /usr/share/cephadm-ansible/inventory ceph1 ceph2 ceph3 ceph4 ceph5 ceph6 ceph7 [admin] ceph1 ceph4 EOF Note Here, the hosts ( ceph1 and ceph4 ) belonging to two different data centers are configured as part of the [admin] group in the inventory file and are tagged as _admin by cephadm . Each of these admin nodes receives the admin Ceph keyring during the bootstrap process so that when one data center is down, we can check the cluster using the other available admin node. Verify that Ansible can access all nodes using the ping module before running the pre-flight playbook. $ ansible -i /usr/share/cephadm-ansible/inventory -m ping all -b Example output: Navigate to the /usr/share/cephadm-ansible directory. Run ansible-playbook with relative file paths. $ ansible-playbook -i /usr/share/cephadm-ansible/inventory /usr/share/cephadm-ansible/cephadm-preflight.yml --extra-vars "ceph_origin=rhcs" The preflight Ansible playbook configures the RHCS dnf repository and prepares the storage cluster for bootstrapping. It also installs podman, lvm2, chronyd, and cephadm. The default location for cephadm-ansible and cephadm-preflight.yml is /usr/share/cephadm-ansible . For additional information, see Running the preflight playbook . 3.5.2. Cluster bootstrapping and service deployment with Cephadm The cephadm utility installs and starts a single Ceph Monitor daemon and a Ceph Manager daemon for a new Red Hat Ceph Storage cluster on the local node where the cephadm bootstrap command is run. In this guide we are going to bootstrap the cluster and deploy all the needed Red Hat Ceph Storage services in one step using a cluster specification YAML file. If you find issues during the deployment, it may be easier to troubleshoot the errors by dividing the deployment into two steps: Bootstrap Service deployment Note For additional information on the bootstrapping process, see Bootstrapping a new storage cluster . Procedure Create a JSON file to authenticate against the container registry as follows: $ cat <<EOF > /root/registry.json { "url":"registry.redhat.io", "username":"User", "password":"Pass" } EOF Create a cluster-spec.yaml that adds the nodes to the Red Hat Ceph Storage cluster and also sets specific labels for where the services should run, following Table 3.1.
cat <<EOF > /root/cluster-spec.yaml service_type: host addr: 10.0.40.78 ## <XXX.XXX.XXX.XXX> hostname: ceph1 ## <ceph-hostname-1> location: root: default datacenter: DC1 labels: - osd - mon - mgr --- service_type: host addr: 10.0.40.35 hostname: ceph2 location: datacenter: DC1 labels: - osd - mon --- service_type: host addr: 10.0.40.24 hostname: ceph3 location: datacenter: DC1 labels: - osd - mds - rgw --- service_type: host addr: 10.0.40.185 hostname: ceph4 location: root: default datacenter: DC2 labels: - osd - mon - mgr --- service_type: host addr: 10.0.40.88 hostname: ceph5 location: datacenter: DC2 labels: - osd - mon --- service_type: host addr: 10.0.40.66 hostname: ceph6 location: datacenter: DC2 labels: - osd - mds - rgw --- service_type: host addr: 10.0.40.221 hostname: ceph7 labels: - mon --- service_type: mon placement: label: "mon" --- service_type: mds service_id: cephfs placement: label: "mds" --- service_type: mgr service_name: mgr placement: label: "mgr" --- service_type: osd service_id: all-available-devices service_name: osd.all-available-devices placement: label: "osd" spec: data_devices: all: true --- service_type: rgw service_id: objectgw service_name: rgw.objectgw placement: count: 2 label: "rgw" spec: rgw_frontend_port: 8080 EOF Retrieve the IP for the NIC with the Red Hat Ceph Storage public network configured from the bootstrap node. After substituting 10.0.40.0 with the subnet that you have defined in your ceph public network, execute the following command. USD ip a | grep 10.0.40 Example output: Run the Cephadm bootstrap command as the root user on the node that will be the initial Monitor node in the cluster. The IP_ADDRESS option is the node's IP address that you are using to run the cephadm bootstrap command. Note If you have configured a different user instead of root for passwordless SSH access, then use the --ssh-user= flag with the cepadm bootstrap command. If you are using non default/id_rsa ssh key names, then use --ssh-private-key and --ssh-public-key options with cephadm command. USD cephadm bootstrap --ssh-user=deployment-user --mon-ip 10.0.40.78 --apply-spec /root/cluster-spec.yaml --registry-json /root/registry.json Important If the local node uses fully-qualified domain names (FQDN), then add the --allow-fqdn-hostname option to cephadm bootstrap on the command line. Once the bootstrap finishes, you will see the following output from the cephadm bootstrap command: You can access the Ceph CLI with: sudo /usr/sbin/cephadm shell --fsid dd77f050-9afe-11ec-a56c-029f8148ea14 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring Please consider enabling telemetry to help improve Ceph: ceph telemetry on For more information see: https://docs.ceph.com/docs/pacific/mgr/telemetry/ Verify the status of Red Hat Ceph Storage cluster deployment using the Ceph CLI client from ceph1: USD ceph -s Example output: Note It may take several minutes for all the services to start. It is normal to get a global recovery event while you don't have any osds configured. You can use ceph orch ps and ceph orch ls to further check the status of the services. Verify if all the nodes are part of the cephadm cluster. USD ceph orch host ls Example output: Note You can run Ceph commands directly from the host because ceph1 was configured in the cephadm-ansible inventory as part of the [admin] group. The Ceph admin keys were copied to the host during the cephadm bootstrap process. Check the current placement of the Ceph monitor services on the datacenters. 
$ ceph orch ps | grep mon | awk '{print $1 " " $2}' Example output: Check the current placement of the Ceph manager services on the datacenters. Example output: Check the Ceph OSD CRUSH map layout to ensure that each host has one OSD configured and its status is UP . Also, double-check that each node is under the right datacenter bucket as specified in Table 3.1. $ ceph osd tree Example output: Create and enable a new RBD block pool. Note The number 32 at the end of the command is the number of PGs assigned to this pool. The number of PGs can vary depending on several factors like the number of OSDs in the cluster, the expected % used of the pool, and so on. You can use the following calculator to determine the number of PGs needed: Ceph Placement Groups (PGs) per Pool Calculator . Verify that the RBD pool has been created. Example output: Verify that the MDS services are active and have located one service on each datacenter. Example output: Create the CephFS volume. $ ceph fs volume create cephfs Note The ceph fs volume create command also creates the needed data and metadata CephFS pools. For more information, see Configuring and Mounting Ceph File Systems . Check the Ceph status to verify how the MDS daemons have been deployed. Ensure that the state is active where ceph6 is the primary MDS for this filesystem and ceph3 is the secondary MDS. $ ceph fs status Example output: Verify that the RGW services are active. $ ceph orch ps | grep rgw Example output: 3.5.3. Configuring Red Hat Ceph Storage stretch mode Once the Red Hat Ceph Storage cluster is fully deployed using cephadm , use the following procedure to configure the stretch cluster mode. The new stretch mode is designed to handle the 2-site case. Procedure Check the current election strategy being used by the monitors with the ceph mon dump command. By default in a Ceph cluster, the election strategy is set to classic. ceph mon dump | grep election_strategy Example output: Change the monitor election strategy to connectivity. ceph mon set election_strategy connectivity Run the ceph mon dump command again to verify the election_strategy value. $ ceph mon dump | grep election_strategy Example output: To know more about the different election strategies, see Configuring monitor election strategy . Set the location for all our Ceph monitors: ceph mon set_location ceph1 datacenter=DC1 ceph mon set_location ceph2 datacenter=DC1 ceph mon set_location ceph4 datacenter=DC2 ceph mon set_location ceph5 datacenter=DC2 ceph mon set_location ceph7 datacenter=DC3 Verify that each monitor has its appropriate location. $ ceph mon dump Example output: To create a CRUSH rule that makes use of this OSD CRUSH topology, install the ceph-base RPM package so that you can use the crushtool command: $ dnf -y install ceph-base To know more about CRUSH rulesets, see Ceph CRUSH ruleset . Get the compiled CRUSH map from the cluster: $ ceph osd getcrushmap > /etc/ceph/crushmap.bin Decompile the CRUSH map and convert it to a text file in order to be able to edit it: $ crushtool -d /etc/ceph/crushmap.bin -o /etc/ceph/crushmap.txt Add the following rule to the end of the CRUSH map by editing the text file /etc/ceph/crushmap.txt . $ vim /etc/ceph/crushmap.txt This example is applicable for active applications in both OpenShift Container Platform clusters. Note The rule id has to be unique. In the example, there is only one other CRUSH rule, with id 0, hence we are using id 1. If your deployment has more rules created, then use the next free id.
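The rule added in this example is the following stretch_rule (it matches the CRUSH map excerpt shown in the command listing for this guide):
rule stretch_rule {
    id 1
    type replicated
    min_size 1
    max_size 10
    step take default
    step choose firstn 0 type datacenter
    step chooseleaf firstn 2 type host
    step emit
}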
The CRUSH rule declared contains the following information: Rule name Description: A unique name for identifying the rule. Value: stretch_rule id Description: A unique whole number for identifying the rule. Value: 1 type Description: Describes a rule for either replicated or erasure-coded storage. Value: replicated min_size Description: If a pool makes fewer replicas than this number, CRUSH will not select this rule. Value: 1 max_size Description: If a pool makes more replicas than this number, CRUSH will not select this rule. Value: 10 step take default Description: Takes the root bucket called default , and begins iterating down the tree. step choose firstn 0 type datacenter Description: Selects the datacenter bucket, and goes into its subtrees. step chooseleaf firstn 2 type host Description: Selects the number of buckets of the given type. In this case, it is two different hosts located in the datacenter it entered at the previous level. step emit Description: Outputs the current value and empties the stack. Typically used at the end of a rule, but may also be used to pick from different trees in the same rule. Compile the new CRUSH map from the file /etc/ceph/crushmap.txt and convert it to a binary file called /etc/ceph/crushmap2.bin : $ crushtool -c /etc/ceph/crushmap.txt -o /etc/ceph/crushmap2.bin Inject the new CRUSH map we created back into the cluster: $ ceph osd setcrushmap -i /etc/ceph/crushmap2.bin Example output: Note The number 17 is a counter and it will increase (18, 19, and so on) depending on the changes you make to the CRUSH map. Verify that the stretch rule created is now available for use. ceph osd crush rule ls Example output: Enable the stretch cluster mode. $ ceph mon enable_stretch_mode ceph7 stretch_rule datacenter In this example, ceph7 is the arbiter node, stretch_rule is the CRUSH rule we created in the previous step, and datacenter is the dividing bucket. Verify that all our pools are using the stretch_rule CRUSH rule we have created in our Ceph cluster: $ for pool in $(rados lspools); do echo -n "Pool: ${pool}; "; ceph osd pool get ${pool} crush_rule; done Example output: This indicates that a working Red Hat Ceph Storage stretched cluster with arbiter mode is now available. 3.6. Installing OpenShift Data Foundation on managed clusters In order to configure storage replication between the two OpenShift Container Platform clusters, the OpenShift Data Foundation operator must first be installed on each managed cluster. Prerequisites Ensure that you have met the hardware requirements for OpenShift Data Foundation external deployments. For a detailed description of the hardware requirements, see External mode requirements . Procedure Install and configure the latest OpenShift Data Foundation cluster on each of the managed clusters. After installing the operator, create a StorageSystem using the option Full deployment type and Connect with external storage platform where your Backing storage type is Red Hat Ceph Storage . For detailed instructions, refer to Deploying OpenShift Data Foundation in external mode . At a minimum, you must use the following three flags with the ceph-external-cluster-details-exporter.py script: --rbd-data-pool-name With the name of the RBD pool that was created during RHCS deployment for OpenShift Container Platform. For example, the pool can be called rbdpool . --rgw-endpoint Provide the endpoint in the format <ip_address>:<port> .
It is the RGW IP of the RGW daemon running on the same site as the OpenShift Container Platform cluster that you are configuring. --run-as-user With a different client name for each site. The following flags are optional if default values were used during the RHCS deployment: --cephfs-filesystem-name With the name of the CephFS filesystem we created during RHCS deployment for OpenShift Container Platform, the default filesystem name is cephfs . --cephfs-data-pool-name With the name of the CephFS data pool we created during RHCS deployment for OpenShift Container Platform, the default pool is called cephfs.data . --cephfs-metadata-pool-name With the name of the CephFS metadata pool we created during RHCS deployment for OpenShift Container Platform, the default pool is called cephfs.meta . Run the following command on the bootstrap node ceph1 , to get the IP for the RGW endpoints in datacenter1 and datacenter2: Example output: Example output: Run the ceph-external-cluster-details-exporter.py with the parameters that are configured for the first OpenShift Container Platform managed cluster cluster1 on bootstrapped node ceph1 . Note Modify the <rgw-endpoint> XXX.XXX.XXX.XXX according to your environment. Run the ceph-external-cluster-details-exporter.py with the parameters that are configured for the first OpenShift Container Platform managed cluster cluster2 on bootstrapped node ceph1 . Note Modify the <rgw-endpoint> XXX.XXX.XXX.XXX according to your environment. Save the two files generated in the bootstrap cluster (ceph1) ocp-cluster1.json and ocp-cluster2.json to your local machine. Use the contents of file ocp-cluster1.json on the OpenShift Container Platform console on cluster1 where external OpenShift Data Foundation is being deployed. Use the contents of file ocp-cluster2.json on the OpenShift Container Platform console on cluster2 where external OpenShift Data Foundation is being deployed. Review the settings and then select Create StorageSystem . Validate the successful deployment of OpenShift Data Foundation on each managed cluster with the following command: For the Multicloud Gateway (MCG): Wait for the status result to be Ready for both queries on the Primary managed cluster and the Secondary managed cluster . On the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . Verify that Status of StorageCluster is Ready and has a green tick mark to it. 3.7. Installing OpenShift Data Foundation Multicluster Orchestrator operator OpenShift Data Foundation Multicluster Orchestrator is a controller that is installed from OpenShift Container Platform's OperatorHub on the Hub cluster. Procedure On the Hub cluster , navigate to OperatorHub and use the keyword filter to search for ODF Multicluster Orchestrator . Click ODF Multicluster Orchestrator tile. Keep all default settings and click Install . Ensure that the operator resources are installed in openshift-operators project and available to all namespaces. Note The ODF Multicluster Orchestrator also installs the Openshift DR Hub Operator on the RHACM hub cluster as a dependency. Verify that the operator Pods are in a Running state. The OpenShift DR Hub operator is also installed at the same time in openshift-operators namespace. Example output: 3.8. 
Configuring SSL access across clusters Configure network (SSL) access between the primary and secondary clusters so that metadata can be stored on the alternate cluster in a Multicloud Gateway (MCG) object bucket using a secure transport protocol and in the Hub cluster for verifying access to the object buckets. Note If all of your OpenShift clusters are deployed using a signed and valid set of certificates for your environment then this section can be skipped. Procedure Extract the ingress certificate for the Primary managed cluster and save the output to primary.crt . Extract the ingress certificate for the Secondary managed cluster and save the output to secondary.crt . Create a new ConfigMap file to hold the remote cluster's certificate bundle with filename cm-clusters-crt.yaml . Note There could be more or less than three certificates for each cluster as shown in this example file. Also, ensure that the certificate contents are correctly indented after you copy and paste from the primary.crt and secondary.crt files that were created before. Create the ConfigMap on the Primary managed cluster , Secondary managed cluster , and the Hub cluster . Example output: Patch default proxy resource on the Primary managed cluster , Secondary managed cluster , and the Hub cluster . Example output: 3.9. Creating Disaster Recovery Policy on Hub cluster Openshift Disaster Recovery Policy (DRPolicy) resource specifies OpenShift Container Platform clusters participating in the disaster recovery solution and the desired replication interval. DRPolicy is a cluster scoped resource that users can apply to applications that require Disaster Recovery solution. The ODF MultiCluster Orchestrator Operator facilitates the creation of each DRPolicy and the corresponding DRClusters through the Multicluster Web console . Prerequisites Ensure that there is a minimum set of two managed clusters. Procedure On the OpenShift console , navigate to All Clusters Data Services Data policies . Click Create DRPolicy . Enter Policy name . Ensure that each DRPolicy has a unique name (for example: ocp4perf1-ocp4perf2 ). Select two clusters from the list of managed clusters to which this new policy will be associated with. Replication policy is automatically set to sync based on the OpenShift clusters selected. Click Create . Verify that the DRPolicy is created successfully. Run this command on the Hub cluster for each of the DRPolicy resources created, where <drpolicy_name> is replaced with your unique name. Example output: When a DRPolicy is created, along with it, two DRCluster resources are also created. It could take up to 10 minutes for all three resources to be validated and for the status to show as Succeeded . Note Editing of SchedulingInterval , ReplicationClassSelector , VolumeSnapshotClassSelector and DRClusters field values are not supported in the DRPolicy. Verify the object bucket access from the Hub cluster to both the Primary managed cluster and the Secondary managed cluster . Get the names of the DRClusters on the Hub cluster. Example output: Check S3 access to each bucket created on each managed cluster. Use the DRCluster validation command, where <drcluster_name> is replaced with your unique name. Note Editing of Region and S3ProfileName field values are non supported in DRClusters. Example output: Note Make sure to run command for both DRClusters on the Hub cluster . Verify that the OpenShift DR Cluster operator installation was successful on the Primary managed cluster and the Secondary managed cluster . 
Example output: You can also verify that OpenShift DR Cluster Operator is installed successfully on the OperatorHub of each managed clusters. Verify that the secret is propagated correctly on the Primary managed cluster and the Secondary managed cluster . Match the output with the s3SecretRef from the Hub cluster : 3.10. Configure DRClusters for fencing automation This configuration is required for enabling fencing prior to application failover. In order to prevent writes to the persistent volume from the cluster which is hit by a disaster, OpenShift DR instructs Red Hat Ceph Storage (RHCS) to fence the nodes of the cluster from the RHCS external storage. This section guides you on how to add the IPs or the IP Ranges for the nodes of the DRCluster. 3.10.1. Add node IP addresses to DRClusters Find the IP addresses for all of the OpenShift nodes in the managed clusters by running this command in the Primary managed cluster and the Secondary managed cluster . Example output: Once you have the IP addresses then the DRCluster resources can be modified for each managed cluster. Find the DRCluster names on the Hub Cluster. Example output: Edit each DRCluster to add your unique IP addresses after replacing <drcluster_name> with your unique name. Example output: Note There could be more than six IP addresses. Modify this DRCluster configuration also for IP addresses on the Secondary managed clusters in the peer DRCluster resource (e.g., ocp4perf2). 3.10.2. Add fencing annotations to DRClusters Add the following annotations to all the DRCluster resources. These annotations include details needed for the NetworkFence resource created later in these instructions (prior to testing application failover). Note Replace <drcluster_name> with your unique name. Example output: Make sure to add these annotations for both DRCluster resources (for example: ocp4perf1 and ocp4perf2 ). 3.11. Create sample application for testing disaster recovery solution OpenShift Data Foundation disaster recovery (DR) solution supports disaster recovery for Subscription-based and ApplicationSet-based applications that are managed by RHACM. For more details, see Subscriptions and ApplicationSet documentation. The following sections detail how to create an application and apply a DRPolicy to an application. Subscription-based applications OpenShift users that do not have cluster-admin permissions, see the knowledge article on how to assign necessary permissions to an application user for executing disaster recovery actions. ApplicationSet-based applications OpenShift users that do not have cluster-admin permissions cannot create ApplicationSet-based applications. 3.11.1. Subscription-based applications 3.11.1.1. Creating a sample application In order to test failover from the Primary managed cluster to the Secondary managed cluster and relocate , we need a sample application. Prerequisites Ensure that the Red Hat OpenShift GitOps operator is installed on the Hub cluster. For instructions, see RHACM documentation . When creating an application for general consumption, ensure that the application is deployed to ONLY one cluster. Use the sample application called busybox as an example. Ensure all external routes of the application are configured using either Global Traffic Manager (GTM) or Global Server Load Balancing (GLSB) service for traffic redirection when the application fails over or is relocated. 
As a best practice, group Red Hat Advanced Cluster Management (RHACM) subscriptions, that belong together, to refer to a single Placement Rule to DR protect them as a group. Further create them as a single application for a logical grouping of the subscriptions for future DR actions like failover and relocate. Note If unrelated subscriptions refer to the same Placement Rule for placement actions, they will also be DR protected as the DR workflow controls all subscriptions that references the Placement Rule. Procedure On the Hub cluster, navigate to Applications and click Create application . Select type as Subscription . Enter your application Name (for example, busybox ) and Namespace (for example, busybox-sample ). In the Repository location for resources section, select Repository type Git . Enter the Git repository URL for the sample application, the github Branch and Path where the resources busybox Pod and PVC will be created. Use the sample application repository as https://github.com/red-hat-storage/ocm-ramen-samples where the Branch is release-4.13 and Path is busybox-odr-metro . Scroll down in the form until you see Deploy application resources on clusters with all specified labels . Select the global Cluster sets or the one that includes the correct managed clusters for your environment. Add a label <name> with its value set to the managed cluster name. Click Create which is at the top right hand corner. On the follow-on screen go to the Topology tab. You should see that there are all Green checkmarks on the application topology. Note To get more information, click on any of the topology elements and a window will appear on the right of the topology view. Validating the sample application deployment. Now that the busybox application has been deployed to your preferred Cluster, the deployment can be validated. Login to your managed cluster where busybox was deployed by RHACM. Example output: 3.11.1.2. Apply DRPolicy to sample application Prerequisites Ensure that both managed clusters referenced in the DRPolicy are reachable. If not, the application will not be DR protected till both clusters are online. Procedure On the Hub cluster, navigate to All Clusters . Navigate to Data Services and then click Data policies . Click the Actions menu at the end of DRPolicy to view the list of available actions. Click Assign DRPolicy . When the Assign DRPolicy modal is displayed, select busybox application and enter PVC label as appname=busybox . Click Apply . Verify that a DRPlacementControl or DRPC was created in the busybox-sample namespace on the Hub cluster and that it's CURRENTSTATE shows as Deployed . This resource is used for both failover and relocate actions for this application. Note Editing of PlacementRef , DRPolicyRef and PVCSelector field values in the yaml are not supported. Example output: After you apply DRPolicy to the applications, confirm whether the ClusterDataProtected is set to True in the drpc yaml output. 3.11.2. ApplicationSet-based applications 3.11.2.1. Creating ApplicationSet-based applications Prerequisite Ensure that the Red Hat OpenShift GitOps operator is installed on the Hub cluster. For instructions, see RHACM documentation . Ensure that both Primary and Secondary managed clusters are registered to GitOps. For registration instructions, see Registering managed clusters to GitOps . Then check if the Placement used by GitOpsCluster resource to register both managed clusters, has the tolerations to deal with cluster unavailability. 
You can verify if the following tolerations are added to the Placement using the command oc get placement <placement-name> -n openshift-gitops -o yaml . In case the tolerations are not added, see Configuring application placement tolerations for Red Hat Advanced Cluster Management and OpenShift GitOps . Procedure On the Hub cluster, navigate to All Clusters Applications and click Create application . Select type as Application set . In General step 1, enter your Application set name . Select Argo server openshift-gitops and Requeue time as 180 seconds. Click . In the Repository location for resources section, select Repository type Git . Enter the Git repository URL for the sample application, the github Branch and Path where the resources busybox Pod and PVC will be created. Use the sample application repository as https://github.com/red-hat-storage/ocm-ramen-samples Select Revision as release-4.13 Choose Path as busybox-odr-metro . Enter Remote namespace value. (example, busybox-sample) and click . Select Sync policy settings and click . You can choose one or more options. Add a label <name> with its value set to the managed cluster name. Click . Review the setting details and click Submit . 3.11.2.2. Apply DRPolicy to sample ApplicationSet-based application Prerequisites Ensure that both managed clusters referenced in the DRPolicy are reachable. If not, the application will not be DR protected till both clusters are online. Procedure On the Hub cluster, navigate to All Clusters . Navigate to Data Services and then click Data policies . Click the Actions menu at the end of DRPolicy to view the list of available actions. Click Manage policy for Application sets . Enter PVC label as appname=busybox . Click Save changes . The application is tagged as protected . Click Apply . Verify that a DRPlacementControl or DRPC was created in the busybox-sample namespace on the Hub cluster and that its CURRENTSTATE shows as Deployed . This resource is used for both failover and relocate actions for this application. Note Editing of PlacementRef , DRPolicyRef and PVCSelector field values in the yaml are not supported. Example output: After you apply DRPolicy to the applications, confirm whether the ClusterDataProtected is set to True in the drpc yaml output. 3.11.3. Deleting sample application You can delete the sample application busybox using the RHACM console. Note The instructions to delete the sample application should not be executed until the failover and relocate testing is completed and the application is ready to be removed from RHACM and the managed clusters. Procedure On the RHACM console, navigate to Applications . Search for the sample application to be deleted (for example, busybox ). Click the Action Menu (...) to the application you want to delete. Click Delete application . When the Delete application is selected a new screen will appear asking if the application related resources should also be deleted. Select Remove application related resources checkbox to delete the Subscription and PlacementRule. Click Delete . This will delete the busybox application on the Primary managed cluster (or whatever cluster the application was running on). In addition to the resources deleted using the RHACM console, the DRPlacementControl must also be deleted after deleting the busybox application. Login to the OpenShift Web console for the Hub cluster and navigate to Installed Operators for the project busybox-sample . Click OpenShift DR Hub Operator and then click DRPlacementControl tab. 
Click the Action Menu (...) to the busybox application DRPlacementControl that you want to delete. Click Delete DRPlacementControl . Click Delete . Note This process can be used to delete any application with a DRPlacementControl resource. 3.12. Subscription-based application failover between managed clusters Prerequisites When the primary cluster is in a state other than Ready , check the actual status of the cluster as it might take some time to update. Navigate to the RHACM console Infrastructure Clusters Cluster list tab. Check the status of both the managed clusters individually before performing failover operation. However, failover operation can still be performed when the cluster you are failing over to is in a Ready state. Procedure Enable fencing on the Hub cluster . Open CLI terminal and edit the DRCluster resource , where <drcluster_name> is your unique name. Caution Once the managed cluster is fenced, all communication from applications to the OpenShift Data Foundation external storage cluster will fail and some Pods will be in an unhealthy state (for example: CreateContainerError , CrashLoopBackOff ) on the cluster that is now fenced. Example output: Verify the fencing status on the Hub cluster for the Primary managed cluster , replacing <drcluster_name> is your unique identifier. Example output: Verify that the IPs that belong to the OpenShift Container Platform cluster nodes are now in the blocklist. Example output On the Hub cluster, navigate to Applications . Click the Actions menu at the end of application row to view the list of available actions. Click Failover application . After the Failover application modal is shown, select policy and target cluster to which the associated application will failover in case of a disaster. Click the Select subscription group dropdown to verify the default selection or modify this setting. By default, the subscription group that replicates for the application resources is selected. Check the status of the Failover readiness . If the status is Ready with a green tick, it indicates that the target cluster is ready for failover to start. Proceed to step 7. If the status is Unknown or Not ready , then wait until the status changes to Ready . Click Initiate . The busybox application is now failing over to the Secondary-managed cluster . Close the modal window and track the status using the Data policy column on the Applications page. Verify that the activity status shows as FailedOver for the application. Navigate to the Applications Overview tab. In the Data policy column, click the policy link for the application you applied the policy to. On the Data policy popover, click the View more details link. 3.13. ApplicationSet-based application failover between managed clusters Prerequisites When the primary cluster is in a state other than Ready , check the actual status of the cluster as it might take some time to update. Navigate to the RHACM console Infrastructure Clusters Cluster list tab. Check the status of both the managed clusters individually before performing failover operation. However, failover operation can still be performed when the cluster you are failing over to is in a Ready state. Procedure Enable fencing on the Hub cluster . Open CLI terminal and edit the DRCluster resource , where <drcluster_name> is your unique name. 
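The edit follows the pattern shown in this guide's example output: set clusterFence to Fenced in the DRCluster spec (the resource name is a placeholder for your unique DRCluster name):
$ oc edit drcluster <drcluster_name>

apiVersion: ramendr.openshift.io/v1alpha1
kind: DRCluster
metadata:
[...]
spec:
  ## Add this line
  clusterFence: Fenced
  cidrs:
  [...]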
Caution Once the managed cluster is fenced, all communication from applications to the OpenShift Data Foundation external storage cluster will fail and some Pods will be in an unhealthy state (for example: CreateContainerError , CrashLoopBackOff ) on the cluster that is now fenced. Example output: Verify the fencing status on the Hub cluster for the Primary managed cluster , replacing <drcluster_name> is your unique identifier. Example output: Verify that the IPs that belong to the OpenShift Container Platform cluster nodes are now in the blocklist. Example output On the Hub cluster, navigate to Applications . Click the Actions menu at the end of application row to view the list of available actions. Click Failover application . When the Failover application modal is shown, verify the details presented are correct and check the status of the Failover readiness . If the status is Ready with a green tick, it indicates that the target cluster is ready for failover to start. Click Initiate . The busybox resources are now created on the target cluster. Close the modal window and track the status using the Data policy column on the Applications page. Verify that the activity status shows as FailedOver for the application. Navigate to the Applications Overview tab. In the Data policy column, click the policy link for the application you applied the policy to. On the Data policy popover, verify that you can see one or more policy names and the ongoing activities associated with the policy in use with the application. 3.14. Relocating Subscription-based application between managed clusters Relocate an application to its preferred location when all managed clusters are available. Prerequisite When primary cluster is in a state other than Ready , check the actual status of the cluster as it might take some time to update. Relocate can only be performed when both primary and preferred clusters are up and running. Navigate to RHACM console Infrastructure Clusters Cluster list tab. Check the status of both the managed clusters individually before performing relocate operation. Verify that applications were cleaned up from the cluster before unfencing it. Procedure Disable fencing on the Hub cluster. Edit the DRCluster resource for this cluster, replacing <drcluster_name> with your unique name. Example output: Gracefully reboot OpenShift Container Platform nodes that were Fenced . A reboot is required to resume the I/O operations after unfencing to avoid any further recovery orchestration failures. Reboot all nodes of the cluster by following the steps in the procedure, Rebooting a node gracefully . Note Make sure that all the nodes are initially cordoned and drained before you reboot and perform uncordon operations on the nodes. After all OpenShift nodes are rebooted and are in a Ready status, verify that all Pods are in a healthy state by running this command on the Primary managed cluster (or whatever cluster has been Unfenced). Example output: The output for this query should be zero Pods before proceeding to the step. Important If there are Pods still in an unhealthy status because of severed storage communication, troubleshoot and resolve before continuing. Because the storage cluster is external to OpenShift, it also has to be properly recovered after a site outage for OpenShift applications to be healthy. Alternatively, you can use the OpenShift Web Console dashboards and Overview tab to assess the health of applications and the external ODF storage cluster. 
The detailed OpenShift Data Foundation dashboard is found by navigating to Storage Data Foundation . Verify that the Unfenced cluster is in a healthy state. Validate the fencing status in the Hub cluster for the Primary-managed cluster, replacing <drcluster_name> with your unique name. Example output: Verify that the IPs that belong to the OpenShift Container Platform cluster nodes are NOT in the blocklist. Ensure that you do not see the IPs added during fencing. On the Hub cluster, navigate to Applications . Click the Actions menu at the end of application row to view the list of available actions. Click Relocate application . When the Relocate application modal is shown, select policy and target cluster to which the associated application will relocate to in case of a disaster. By default, the subscription group that will deploy the application resources is selected. Click the Select subscription group dropdown to verify the default selection or modify this setting. Check the status of the Relocation readiness . If the status is Ready with a green tick, it indicates that the target cluster is ready for relocation to start. Proceed to step 7. If the status is Unknown or Not ready , then wait until the status changes to Ready . Click Initiate . The busybox resources are now created on the target cluster. Close the modal window and track the status using the Data policy column on the Applications page. Verify that the activity status shows as Relocated for the application. Navigate to the Applications Overview tab. In the Data policy column, click the policy link for the application you applied the policy to. On the Data policy popover, click the View more details link. 3.15. Relocating an ApplicationSet-based application between managed clusters Relocate an application to its preferred location when all managed clusters are available. Prerequisite When primary cluster is in a state other than Ready , check the actual status of the cluster as it might take some time to update. Relocate can only be performed when both primary and preferred clusters are up and running. Navigate to RHACM console Infrastructure Clusters Cluster list tab. Check the status of both the managed clusters individually before performing relocate operation. Verify that applications were cleaned up from the cluster before unfencing it. Procedure Disable fencing on the Hub cluster. Edit the DRCluster resource for this cluster, replacing <drcluster_name> with your unique name. Example output: Gracefully reboot OpenShift Container Platform nodes that were Fenced . A reboot is required to resume the I/O operations after unfencing to avoid any further recovery orchestration failures. Reboot all nodes of the cluster by following the steps in the procedure, Rebooting a node gracefully . Note Make sure that all the nodes are initially cordoned and drained before you reboot and perform uncordon operations on the nodes. After all OpenShift nodes are rebooted and are in a Ready status, verify that all Pods are in a healthy state by running this command on the Primary managed cluster (or whatever cluster has been Unfenced). Example output: The output for this query should be zero Pods before proceeding to the step. Important If there are Pods still in an unhealthy status because of severed storage communication, troubleshoot and resolve before continuing. Because the storage cluster is external to OpenShift, it also has to be properly recovered after a site outage for OpenShift applications to be healthy. 
Alternatively, you can use the OpenShift Web Console dashboards and Overview tab to assess the health of applications and the external ODF storage cluster. The detailed OpenShift Data Foundation dashboard is found by navigating to Storage Data Foundation . Verify that the Unfenced cluster is in a healthy state. Validate the fencing status in the Hub cluster for the Primary-managed cluster, replacing <drcluster_name> with your unique name. Example output: Verify that the IPs that belong to the OpenShift Container Platform cluster nodes are NOT in the blocklist. Ensure that you do not see the IPs added during fencing. On the Hub cluster, navigate to Applications . Click the Actions menu at the end of application row to view the list of available actions. Click Relocate application . When the Relocate application modal is shown, select policy and target cluster to which the associated application will relocate to in case of a disaster. Click Initiate . The busybox resources are now created on the target cluster. Close the modal window and track the status using the Data policy column on the Applications page. Verify that the activity status shows as Relocated for the application. Navigate to the Applications Overview tab. In the Data policy column, click the policy link for the application you applied the policy to. On the Data policy popover, verify that you can see one or more policy names and the relocation status associated with the policy in use with the application.
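If you prefer to confirm the result from the CLI, a quick check on the Hub cluster is to query the DRPlacementControl for the application and confirm that its CURRENTSTATE reports Relocated; this is a sketch assuming the busybox-sample namespace used in the examples in this guide:
$ oc get drpc -n busybox-sample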
[ "subscription-manager register subscription-manager subscribe --pool=8a8XXXXXX9e0", "subscription-manager repos --disable=\"*\" --enable=\"rhel9-for-x86_64-baseos-rpms\" --enable=\"rhel9-for-x86_64-appstream-rpms\"", "dnf update -y reboot", "subscription-manager repos --enable=\"ansible-2.9-for-rhel-9-x86_64-rpms\" --enable=\"rhceph-6-tools-for-rhel-9-x86_64-rpms\"", "hostnamectl set-hostname <short_name>", "hostname", "ceph1", "DOMAIN=\"example.domain.com\" cat <<EOF >/etc/hosts 127.0.0.1 USD(hostname).USD{DOMAIN} USD(hostname) localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 USD(hostname).USD{DOMAIN} USD(hostname) localhost6 localhost6.localdomain6 EOF", "hostname -f", "ceph1.example.domain.com", "sudo dnf install -y cephadm-ansible", "cat <<EOF > ~/.ssh/config Host ceph* User deployment-user IdentityFile ~/.ssh/ceph.pem EOF", "cat <<EOF > /usr/share/cephadm-ansible/inventory ceph1 ceph2 ceph3 ceph4 ceph5 ceph6 ceph7 [admin] ceph1 ceph4 EOF", "ansible -i /usr/share/cephadm-ansible/inventory -m ping all -b", "ceph6 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/libexec/platform-python\" }, \"changed\": false, \"ping\": \"pong\" } ceph4 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/libexec/platform-python\" }, \"changed\": false, \"ping\": \"pong\" } ceph3 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/libexec/platform-python\" }, \"changed\": false, \"ping\": \"pong\" } ceph2 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/libexec/platform-python\" }, \"changed\": false, \"ping\": \"pong\" } ceph5 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/libexec/platform-python\" }, \"changed\": false, \"ping\": \"pong\" } ceph1 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/libexec/platform-python\" }, \"changed\": false, \"ping\": \"pong\" } ceph7 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/libexec/platform-python\" }, \"changed\": false, \"ping\": \"pong\" }", "ansible-playbook -i /usr/share/cephadm-ansible/inventory /usr/share/cephadm-ansible/cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\"", "cat <<EOF > /root/registry.json { \"url\":\"registry.redhat.io\", \"username\":\"User\", \"password\":\"Pass\" } EOF", "cat <<EOF > /root/cluster-spec.yaml service_type: host addr: 10.0.40.78 ## <XXX.XXX.XXX.XXX> hostname: ceph1 ## <ceph-hostname-1> location: root: default datacenter: DC1 labels: - osd - mon - mgr --- service_type: host addr: 10.0.40.35 hostname: ceph2 location: datacenter: DC1 labels: - osd - mon --- service_type: host addr: 10.0.40.24 hostname: ceph3 location: datacenter: DC1 labels: - osd - mds - rgw --- service_type: host addr: 10.0.40.185 hostname: ceph4 location: root: default datacenter: DC2 labels: - osd - mon - mgr --- service_type: host addr: 10.0.40.88 hostname: ceph5 location: datacenter: DC2 labels: - osd - mon --- service_type: host addr: 10.0.40.66 hostname: ceph6 location: datacenter: DC2 labels: - osd - mds - rgw --- service_type: host addr: 10.0.40.221 hostname: ceph7 labels: - mon --- service_type: mon placement: label: \"mon\" --- service_type: mds service_id: cephfs placement: label: \"mds\" --- service_type: mgr service_name: mgr placement: label: \"mgr\" --- service_type: osd service_id: all-available-devices service_name: osd.all-available-devices placement: label: \"osd\" spec: data_devices: all: true --- service_type: 
rgw service_id: objectgw service_name: rgw.objectgw placement: count: 2 label: \"rgw\" spec: rgw_frontend_port: 8080 EOF", "ip a | grep 10.0.40", "10.0.40.78", "cephadm bootstrap --ssh-user=deployment-user --mon-ip 10.0.40.78 --apply-spec /root/cluster-spec.yaml --registry-json /root/registry.json", "You can access the Ceph CLI with: sudo /usr/sbin/cephadm shell --fsid dd77f050-9afe-11ec-a56c-029f8148ea14 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring Please consider enabling telemetry to help improve Ceph: ceph telemetry on For more information see: https://docs.ceph.com/docs/pacific/mgr/telemetry/", "ceph -s", "cluster: id: 3a801754-e01f-11ec-b7ab-005056838602 health: HEALTH_OK services: mon: 5 daemons, quorum ceph1,ceph2,ceph4,ceph5,ceph7 (age 4m) mgr: ceph1.khuuot(active, since 5m), standbys: ceph4.zotfsp osd: 12 osds: 12 up (since 3m), 12 in (since 4m) rgw: 2 daemons active (2 hosts, 1 zones) data: pools: 5 pools, 107 pgs objects: 191 objects, 5.3 KiB usage: 105 MiB used, 600 GiB / 600 GiB avail 105 active+clean", "ceph orch host ls", "HOST ADDR LABELS STATUS ceph1 10.0.40.78 _admin osd mon mgr ceph2 10.0.40.35 osd mon ceph3 10.0.40.24 osd mds rgw ceph4 10.0.40.185 osd mon mgr ceph5 10.0.40.88 osd mon ceph6 10.0.40.66 osd mds rgw ceph7 10.0.40.221 mon", "ceph orch ps | grep mon | awk '{print USD1 \" \" USD2}'", "mon.ceph1 ceph1 mon.ceph2 ceph2 mon.ceph4 ceph4 mon.ceph5 ceph5 mon.ceph7 ceph7", "ceph orch ps | grep mgr | awk '{print USD1 \" \" USD2}'", "mgr.ceph2.ycgwyz ceph2 mgr.ceph5.kremtt ceph5", "ceph osd tree", "ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 0.87900 root default -16 0.43950 datacenter DC1 -11 0.14650 host ceph1 2 ssd 0.14650 osd.2 up 1.00000 1.00000 -3 0.14650 host ceph2 3 ssd 0.14650 osd.3 up 1.00000 1.00000 -13 0.14650 host ceph3 4 ssd 0.14650 osd.4 up 1.00000 1.00000 -17 0.43950 datacenter DC2 -5 0.14650 host ceph4 0 ssd 0.14650 osd.0 up 1.00000 1.00000 -9 0.14650 host ceph5 1 ssd 0.14650 osd.1 up 1.00000 1.00000 -7 0.14650 host ceph6 5 ssd 0.14650 osd.5 up 1.00000 1.00000", "ceph osd pool create 32 32 ceph osd pool application enable rbdpool rbd", "ceph osd lspools | grep rbdpool", "3 rbdpool", "ceph orch ps | grep mds", "mds.cephfs.ceph3.cjpbqo ceph3 running (17m) 117s ago 17m 16.1M - 16.2.9 mds.cephfs.ceph6.lqmgqt ceph6 running (17m) 117s ago 17m 16.1M - 16.2.9", "ceph fs volume create cephfs", "ceph fs status", "cephfs - 0 clients ====== RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS 0 active cephfs.ceph6.ggjywj Reqs: 0 /s 10 13 12 0 POOL TYPE USED AVAIL cephfs.cephfs.meta metadata 96.0k 284G cephfs.cephfs.data data 0 284G STANDBY MDS cephfs.ceph3.ogcqkl", "ceph orch ps | grep rgw", "rgw.objectgw.ceph3.kkmxgb ceph3 *:8080 running (7m) 3m ago 7m 52.7M - 16.2.9 rgw.objectgw.ceph6.xmnpah ceph6 *:8080 running (7m) 3m ago 7m 53.3M - 16.2.9", "ceph mon dump | grep election_strategy", "dumped monmap epoch 9 election_strategy: 1", "ceph mon set election_strategy connectivity", "ceph mon dump | grep election_strategy", "dumped monmap epoch 10 election_strategy: 3", "ceph mon set_location ceph1 datacenter=DC1 ceph mon set_location ceph2 datacenter=DC1 ceph mon set_location ceph4 datacenter=DC2 ceph mon set_location ceph5 datacenter=DC2 ceph mon set_location ceph7 datacenter=DC3", "ceph mon dump", "epoch 17 fsid dd77f050-9afe-11ec-a56c-029f8148ea14 last_changed 2022-03-04T07:17:26.913330+0000 created 2022-03-03T14:33:22.957190+0000 min_mon_release 16 (pacific) election_strategy: 3 0: [v2:10.0.143.78:3300/0,v1:10.0.143.78:6789/0] mon.ceph1; 
crush_location {datacenter=DC1} 1: [v2:10.0.155.185:3300/0,v1:10.0.155.185:6789/0] mon.ceph4; crush_location {datacenter=DC2} 2: [v2:10.0.139.88:3300/0,v1:10.0.139.88:6789/0] mon.ceph5; crush_location {datacenter=DC2} 3: [v2:10.0.150.221:3300/0,v1:10.0.150.221:6789/0] mon.ceph7; crush_location {datacenter=DC3} 4: [v2:10.0.155.35:3300/0,v1:10.0.155.35:6789/0] mon.ceph2; crush_location {datacenter=DC1}", "dnf -y install ceph-base", "ceph osd getcrushmap > /etc/ceph/crushmap.bin", "crushtool -d /etc/ceph/crushmap.bin -o /etc/ceph/crushmap.txt", "vim /etc/ceph/crushmap.txt", "rule stretch_rule { id 1 type replicated min_size 1 max_size 10 step take default step choose firstn 0 type datacenter step chooseleaf firstn 2 type host step emit } end crush map", "crushtool -c /etc/ceph/crushmap.txt -o /etc/ceph/crushmap2.bin", "ceph osd setcrushmap -i /etc/ceph/crushmap2.bin", "17", "ceph osd crush rule ls", "replicated_rule stretch_rule", "ceph mon enable_stretch_mode ceph7 stretch_rule datacenter", "for pool in USD(rados lspools);do echo -n \"Pool: USD{pool}; \";ceph osd pool get USD{pool} crush_rule;done", "Pool: device_health_metrics; crush_rule: stretch_rule Pool: cephfs.cephfs.meta; crush_rule: stretch_rule Pool: cephfs.cephfs.data; crush_rule: stretch_rule Pool: .rgw.root; crush_rule: stretch_rule Pool: default.rgw.log; crush_rule: stretch_rule Pool: default.rgw.control; crush_rule: stretch_rule Pool: default.rgw.meta; crush_rule: stretch_rule Pool: rbdpool; crush_rule: stretch_rule", "ceph orch ps | grep rgw.objectgw", "rgw.objectgw.ceph3.mecpzm ceph3 *:8080 running (5d) 31s ago 7w 204M - 16.2.7-112.el8cp rgw.objectgw.ceph6.mecpzm ceph6 *:8080 running (5d) 31s ago 7w 204M - 16.2.7-112.el8cp", "host ceph3.example.com host ceph6.example.com", "ceph3.example.com has address 10.0.40.24 ceph6.example.com has address 10.0.40.66", "python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name rbdpool --cephfs-filesystem-name cephfs --cephfs-data-pool-name cephfs.cephfs.data --cephfs-metadata-pool-name cephfs.cephfs.meta --<rgw-endpoint> XXX.XXX.XXX.XXX:8080 --run-as-user client.odf.cluster1 > ocp-cluster1.json", "python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name rbdpool --cephfs-filesystem-name cephfs --cephfs-data-pool-name cephfs.cephfs.data --cephfs-metadata-pool-name cephfs.cephfs.meta --rgw-endpoint XXX.XXX.XXX.XXX:8080 --run-as-user client.odf.cluster2 > ocp-cluster2.json", "oc get storagecluster -n openshift-storage ocs-external-storagecluster -o jsonpath='{.status.phase}{\"\\n\"}'", "oc get noobaa -n openshift-storage noobaa -o jsonpath='{.status.phase}{\"\\n\"}'", "oc get pods -n openshift-operators", "NAME READY STATUS RESTARTS AGE odf-multicluster-console-6845b795b9-blxrn 1/1 Running 0 4d20h odfmo-controller-manager-f9d9dfb59-jbrsd 1/1 Running 0 4d20h ramen-hub-operator-6fb887f885-fss4w 2/2 Running 0 4d20h", "oc get cm default-ingress-cert -n openshift-config-managed -o jsonpath=\"{['data']['ca-bundle\\.crt']}\" > primary.crt", "oc get cm default-ingress-cert -n openshift-config-managed -o jsonpath=\"{['data']['ca-bundle\\.crt']}\" > secondary.crt", "apiVersion: v1 data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- <copy contents of cert1 from primary.crt here> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <copy contents of cert2 from primary.crt here> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <copy contents of cert3 primary.crt here> -----END CERTIFICATE---- -----BEGIN CERTIFICATE----- <copy contents of cert1 from secondary.crt here> 
-----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <copy contents of cert2 from secondary.crt here> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <copy contents of cert3 from secondary.crt here> -----END CERTIFICATE----- kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config", "oc create -f cm-clusters-crt.yaml", "configmap/user-ca-bundle created", "oc patch proxy cluster --type=merge --patch='{\"spec\":{\"trustedCA\":{\"name\":\"user-ca-bundle\"}}}'", "proxy.config.openshift.io/cluster patched", "oc get drpolicy <drpolicy_name> -o jsonpath='{.status.conditions[].reason}{\"\\n\"}'", "Succeeded", "oc get drclusters", "oc get drcluster <drcluster_name> -o jsonpath='{.status.conditions[2].reason}{\"\\n\"}'", "Succeeded", "oc get csv,pod -n openshift-dr-system", "NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/odr-cluster-operator.v4.13.0 Openshift DR Cluster Operator 4.13.0 Succeeded clusterserviceversion.operators.coreos.com/volsync-product.v0.7.1 VolSync 0.7.1 volsync-product.v0.7.0 Succeeded NAME READY STATUS RESTARTS AGE pod/ramen-dr-cluster-operator-88f6d86fd-ckmrj 2/2 Running 0 150m", "get secrets -n openshift-dr-system | grep Opaque", "get cm -n openshift-operators ramen-hub-operator-config -oyaml", "oc get nodes -o jsonpath='{range .items[*]}{.status.addresses[?(@.type==\"ExternalIP\")].address}{\"\\n\"}{end}'", "10.70.56.118 10.70.56.193 10.70.56.154 10.70.56.242 10.70.56.136 10.70.56.99", "oc get drcluster", "NAME AGE ocp4perf1 5m35s ocp4perf2 5m35s", "oc edit drcluster <drcluster_name>", "apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: [...] spec: s3ProfileName: s3profile-<drcluster_name>-ocs-external-storagecluster ## Add this section cidrs: - <IP_Address1>/32 - <IP_Address2>/32 - <IP_Address3>/32 - <IP_Address4>/32 - <IP_Address5>/32 - <IP_Address6>/32 [...]", "drcluster.ramendr.openshift.io/ocp4perf1 edited", "oc edit drcluster <drcluster_name>", "apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: ## Add this section annotations: drcluster.ramendr.openshift.io/storage-clusterid: openshift-storage drcluster.ramendr.openshift.io/storage-driver: openshift-storage.rbd.csi.ceph.com drcluster.ramendr.openshift.io/storage-secret-name: rook-csi-rbd-provisioner drcluster.ramendr.openshift.io/storage-secret-namespace: openshift-storage [...]", "drcluster.ramendr.openshift.io/ocp4perf1 edited", "oc get pods,pvc -n busybox-sample", "NAME READY STATUS RESTARTS AGE pod/busybox-67bf494b9-zl5tr 1/1 Running 0 77s NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE persistentvolumeclaim/busybox-pvc Bound pvc-c732e5fe-daaf-4c4d-99dd-462e04c18412 5Gi RWO ocs-storagecluster-ceph-rbd 77s", "oc get drpc -n busybox-sample", "NAME AGE PREFERREDCLUSTER FAILOVERCLUSTER DESIREDSTATE CURRENTSTATE busybox-placement-1-drpc 6m59s ocp4perf1 Deployed", "tolerations: - key: cluster.open-cluster-management.io/unreachable operator: Exists - key: cluster.open-cluster-management.io/unavailable operator: Exists", "oc get drpc -n busybox-sample", "NAME AGE PREFERREDCLUSTER FAILOVERCLUSTER DESIREDSTATE CURRENTSTATE busybox-placement-1-drpc 6m59s ocp4perf1 Deployed", "oc edit drcluster <drcluster_name>", "apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: [...] spec: ## Add this line clusterFence: Fenced cidrs: [...] 
[...]", "drcluster.ramendr.openshift.io/ocp4perf1 edited", "oc get drcluster.ramendr.openshift.io <drcluster_name> -o jsonpath='{.status.phase}{\"\\n\"}'", "Fenced", "ceph osd blocklist ls", "cidr:10.1.161.1:0/32 2028-10-30T22:30:03.585634+0000 cidr:10.1.161.14:0/32 2028-10-30T22:30:02.483561+0000 cidr:10.1.161.51:0/32 2028-10-30T22:30:01.272267+0000 cidr:10.1.161.63:0/32 2028-10-30T22:30:05.099655+0000 cidr:10.1.161.129:0/32 2028-10-30T22:29:58.335390+0000 cidr:10.1.161.130:0/32 2028-10-30T22:29:59.861518+0000", "oc edit drcluster <drcluster_name>", "apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: [...] spec: ## Add this line clusterFence: Fenced cidrs: [...] [...]", "drcluster.ramendr.openshift.io/ocp4perf1 edited", "oc get drcluster.ramendr.openshift.io <drcluster_name> -o jsonpath='{.status.phase}{\"\\n\"}'", "Fenced", "ceph osd blocklist ls", "cidr:10.1.161.1:0/32 2028-10-30T22:30:03.585634+0000 cidr:10.1.161.14:0/32 2028-10-30T22:30:02.483561+0000 cidr:10.1.161.51:0/32 2028-10-30T22:30:01.272267+0000 cidr:10.1.161.63:0/32 2028-10-30T22:30:05.099655+0000 cidr:10.1.161.129:0/32 2028-10-30T22:29:58.335390+0000 cidr:10.1.161.130:0/32 2028-10-30T22:29:59.861518+0000", "oc edit drcluster <drcluster_name>", "apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: [...] spec: cidrs: [...] ## Modify this line clusterFence: Unfenced [...] [...]", "drcluster.ramendr.openshift.io/ocp4perf1 edited", "get pods -A | egrep -v 'Running|Completed'", "NAMESPACE NAME READY STATUS RESTARTS AGE", "oc get drcluster.ramendr.openshift.io <drcluster_name> -o jsonpath='{.status.phase}{\"\\n\"}'", "Unfenced", "ceph osd blocklist ls", "oc edit drcluster <drcluster_name>", "apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: [...] spec: cidrs: [...] ## Modify this line clusterFence: Unfenced [...] [...]", "drcluster.ramendr.openshift.io/ocp4perf1 edited", "get pods -A | egrep -v 'Running|Completed'", "NAMESPACE NAME READY STATUS RESTARTS AGE", "oc get drcluster.ramendr.openshift.io <drcluster_name> -o jsonpath='{.status.phase}{\"\\n\"}'", "Unfenced", "ceph osd blocklist ls" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/metro-dr-solution
Chapter 7. Virtualization
Chapter 7. Virtualization Increased Maximum Number of vCPUs in KVM The maximum number of supported virtual CPUs (vCPUs) in a KVM guest has been increased to 240. This increases the number of virtual processing units that a user can assign to the guest, and therefore improves its performance potential. 5th Generation Intel Core New Instructions Support in QEMU, KVM, and libvirt API In Red Hat Enterprise Linux 7.1, support for 5th Generation Intel Core processors has been added to the QEMU hypervisor, the KVM kernel code, and the libvirt API. This allows KVM guests to use the following instructions and features: ADCX, ADOX, RDSEED, PREFETCHW, and supervisor mode access prevention (SMAP). USB 3.0 Support for KVM Guests Red Hat Enterprise Linux 7.1 features improved USB support by adding USB 3.0 host adapter (xHCI) emulation as a Technology Preview. Compression for the dump-guest-memory Command Since Red Hat Enterprise Linux 7.1, the dump-guest-memory command supports crash dump compression. This allows users who cannot use the virsh dump command to use less hard disk space for guest crash dumps. In addition, saving a compressed guest crash dump usually takes less time than saving a non-compressed one. Open Virtual Machine Firmware The Open Virtual Machine Firmware (OVMF) is available as a Technology Preview in Red Hat Enterprise Linux 7.1. OVMF is a UEFI secure boot environment for AMD64 and Intel 64 guests. Improved Network Performance on Hyper-V Several new features of the Hyper-V network driver have been introduced to improve network performance. For example, Receive-Side Scaling, Large Send Offload, and Scatter/Gather I/O are now supported, and network throughput is increased. hypervfcopyd in hyperv-daemons The hypervfcopyd daemon has been added to the hyperv-daemons packages. hypervfcopyd is an implementation of file copy service functionality for Linux guests running on a Hyper-V 2012 R2 host. It enables the host to copy a file (over VMBUS) into the Linux guest. New Features in libguestfs Red Hat Enterprise Linux 7.1 introduces a number of new features in libguestfs , a set of tools for accessing and modifying virtual machine disk images. Namely: virt-builder - a new tool for building virtual machine images. Use virt-builder to rapidly and securely create guests and customize them. virt-customize - a new tool for customizing virtual machine disk images. Use virt-customize to install packages, edit configuration files, run scripts, and set passwords. virt-diff - a new tool for showing differences between the file systems of two virtual machines. Use virt-diff to easily discover what files have been changed between snapshots. virt-log - a new tool for listing log files from guests. The virt-log tool supports a variety of guests, including traditional Linux logs, the Linux journal, and the Windows event log. virt-v2v - a new tool for converting guests from a foreign hypervisor to run on KVM, managed by libvirt, OpenStack, oVirt, Red Hat Enterprise Virtualization (RHEV), and several other targets. Currently, virt-v2v can convert Red Hat Enterprise Linux and Windows guests running on Xen and VMware ESX. Flight Recorder Tracing Support for flight recorder tracing has been introduced in Red Hat Enterprise Linux 7.1. Flight recorder tracing uses SystemTap to automatically capture qemu-kvm data as long as the guest machine is running. This provides an additional avenue for investigating qemu-kvm problems, more flexible than qemu-kvm core dumps.
For detailed instructions on how to configure and use flight recorder tracing, see the Virtualization Deployment and Administration Guide . LPAR Watchdog for IBM System z As a Technology Preview, Red Hat Enterprise Linux 7.1 introduces a new watchdog driver for IBM System z. This enhanced watchdog supports Linux logical partitions (LPAR) as well as Linux guests in the z/VM hypervisor, and provides automatic reboot and automatic dump capabilities if a Linux system becomes unresponsive. RDMA-based Migration of Live Guests The support for Remote Direct Memory Access (RDMA)-based migration has been added to libvirt . As a result, it is now possible to use the new rdma:// migration URI to request migration over RDMA, which allows for significantly shorter live migration of large guests. Note that prior to using RDMA-based migration, RDMA has to be configured and libvirt has to be set up to use it. Removal of Q35 Chipset, PCI Express Bus, and AHCI Bus Emulation Red Hat Enterprise Linux 7.1 removes the emulation of the Q35 machine type, required also for supporting the PCI Express (PCIe) bus and the Advanced Host Controller Interface (AHCI) bus in KVM guest virtual machines. These features were previously available on Red Hat Enterprise Linux as Technology Previews. However, they are still being actively developed and might become available in the future as part of Red Hat products.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.1_release_notes/chap-Red_Hat_Enterprise_Linux-7.1_Release_Notes-Virtualization
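To illustrate the libguestfs tooling listed above, the following sketch shows how virt-builder and virt-customize could be combined to create and tailor a guest image. The template name (fedora-20), output path, package, and password are placeholders, and the exact option set can vary between libguestfs versions:

# Build a guest disk image from a published template (template name is an example)
virt-builder fedora-20 --format qcow2 --size 10G -o /var/lib/libvirt/images/webapp.qcow2

# Customize the image: install a package, set the root password, enable a service
virt-customize -a /var/lib/libvirt/images/webapp.qcow2 \
    --install httpd \
    --root-password password:changeme \
    --run-command 'systemctl enable httpd'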
Preface
Preface Open Java Development Kit (OpenJDK) is a free and open source implementation of the Java Platform, Standard Edition (Java SE). The Red Hat build of OpenJDK is available in three versions: 8u, 11u, and 17u. Packages for the Red Hat build of OpenJDK are made available on Red Hat Enterprise Linux and Microsoft Windows and shipped as a JDK and JRE in the Red Hat Ecosystem Catalog.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.2/pr01
5.4. Configuring Fence Devices
5.4. Configuring Fence Devices Configuring fence devices for the cluster consists of selecting one or more fence devices and specifying fence-device-dependent parameters (for example, name, IP address, login, and password). To configure fence devices, follow these steps: Click Fence Devices . At the bottom of the right frame (labeled Properties ), click the Add a Fence Device button. Clicking Add a Fence Device causes the Fence Device Configuration dialog box to be displayed (refer to Figure 5.4, "Fence Device Configuration" ). Figure 5.4. Fence Device Configuration At the Fence Device Configuration dialog box, click the drop-down box under Add a New Fence Device and select the type of fence device to configure. Specify the information in the Fence Device Configuration dialog box according to the type of fence device. Refer to Appendix B, Fence Device Parameters for more information about fence device parameters. Click OK . Choose File => Save to save the changes to the cluster configuration.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s1-config-fence-devices-ca
14.3.4. Distributing and Trusting SSH CA Public Keys
14.3.4. Distributing and Trusting SSH CA Public Keys Hosts that are to allow certificate-authenticated login from users must be configured to trust the CA's public key that was used to sign the user certificates, in order to authenticate users' certificates. In this example that is the ca_user_key.pub . Publish the ca_user_key.pub key and download it to all hosts that are required to allow remote users to log in. Alternatively, copy the CA user public key to all the hosts. In a production environment, consider copying the public key to an administrator account first. The secure copy command can be used to copy the public key to remote hosts. The command has the following format: scp ~/.ssh/ca_user_key.pub root@ host_name .example.com:/etc/ssh/ Where host_name is the host name of a server that is required to authenticate users' certificates presented during the login process. Ensure that you copy the public key, not the private key. For example, as root : For remote user authentication, CA keys can be marked as trusted per-user in the ~/.ssh/authorized_keys file using the cert-authority directive or for global use by means of the TrustedUserCAKeys directive in the /etc/ssh/sshd_config file. For remote host authentication, CA keys can be marked as trusted globally in the /etc/ssh/known_hosts file or per-user in the ~/.ssh/ssh_known_hosts file. Procedure 14.2. Trusting the User Signing Key For user certificates which have one or more principals listed, and where the setting is to have global effect, edit the /etc/ssh/sshd_config file as follows: TrustedUserCAKeys /etc/ssh/ca_user_key.pub Restart sshd to make the changes take effect: To avoid being presented with the warning about an unknown host, a user's system must trust the CA's public key that was used to sign the host certificates. In this example that is ca_host_key.pub . Procedure 14.3. Trusting the Host Signing Key Extract the contents of the public key used to sign the host certificate. For example, on the CA: To configure client systems to trust servers' signed host certificates, add the contents of the ca_host_key.pub into the global known_hosts file. This automatically checks a server's advertised host certificate against the CA public key for all users every time a new machine is connected to in the domain *.example.com . Log in as root and configure the /etc/ssh/ssh_known_hosts file, as follows: Where ssh-rsa AAAAB5Wm. is the contents of ca_host_key.pub . The above configures the system to trust the CA server's host public key. This enables global authentication of the certificates presented by hosts to remote users.
[ "~]# scp ~/.ssh/ca_user_key.pub root@host_name.example.com:/etc/ssh/ The authenticity of host 'host_name.example.com (10.34.74.56)' can't be established. RSA key fingerprint is fc:23:ad:ae:10:6f:d1:a1:67:ee:b1:d5:37:d4:b0:2f. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added 'host_name.example.com,10.34.74.56' (RSA) to the list of known hosts. root@host_name.example.com's password: ca_user_key.pub 100% 420 0.4KB/s 00:00", "~]# service sshd restart", "cat ~/.ssh/ca_host_key.pub ssh-rsa AAAAB5Wm. == [email protected]", "~]# vi /etc/ssh/ssh_known_hosts A CA key, accepted for any host in *.example.com @cert-authority *.example.com ssh-rsa AAAAB5Wm." ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-distributing_and_trusting_ssh_ca_public_keys
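As a complement to the procedures above, the following sketch shows the per-user alternative to TrustedUserCAKeys: trusting the CA key through the cert-authority directive in ~/.ssh/authorized_keys. The signing step is included only for context, and the key identity, principal, and validity period are illustrative placeholders:

# On the CA: sign a user's public key (identity, principal, and validity are examples)
ssh-keygen -s ~/.ssh/ca_user_key -I user1_id -n user1 -V +52w ~/.ssh/id_rsa.pub

# On the server, as the user: mark the CA public key as a certificate authority
# in the user's authorized_keys file instead of setting TrustedUserCAKeys globally
echo "cert-authority $(cat /etc/ssh/ca_user_key.pub)" >> ~/.ssh/authorized_keys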
Chapter 3. An active/passive NFS Server in a Red Hat High Availability Cluster
Chapter 3. An active/passive NFS Server in a Red Hat High Availability Cluster This chapter describes how to configure a highly available active/passive NFS server on a two-node Red Hat Enterprise Linux High Availability Add-On cluster using shared storage. The procedure uses pcs to configure Pacemaker cluster resources. In this use case, clients access the NFS file system through a floating IP address. The NFS server runs on one of two nodes in the cluster. If the node on which the NFS server is running becomes inoperative, the NFS server starts up again on the second node of the cluster with minimal service interruption. This use case requires that your system include the following components: Two nodes, which will be used to create the cluster running the NFS server. In this example, the nodes used are z1.example.com and z2.example.com . A power fencing device for each node of the cluster. This example uses two ports of the APC power switch with a host name of zapc.example.com . A public virtual IP address, required for the NFS server. Shared storage for the nodes in the cluster, using iSCSI, Fibre Channel, or another shared network block device. Configuring a highly available active/passive NFS server on a two-node Red Hat Enterprise Linux High Availability Add-On cluster requires that you perform the following steps. Create the cluster that will run the NFS server and configure fencing for each node in the cluster, as described in Section 3.1, "Creating the NFS Cluster" . Configure an ext4 file system mounted on the LVM logical volume my_lv on the shared storage for the nodes in the cluster, as described in Section 3.2, "Configuring an LVM Volume with an ext4 File System" . Configure an NFS share on the shared storage on the LVM logical volume, as described in Section 3.3, "NFS Share Setup" . Ensure that only the cluster is capable of activating the LVM volume group that contains the logical volume my_lv , and that the volume group will not be activated outside of the cluster on startup, as described in Section 3.4, "Exclusive Activation of a Volume Group in a Cluster" . Create the cluster resources as described in Section 3.5, "Configuring the Cluster Resources" . Test the NFS server you have configured, as described in Section 3.6, "Testing the Resource Configuration" . 3.1. Creating the NFS Cluster Use the following procedure to install and create the NFS cluster. Install the cluster software on nodes z1.example.com and z2.example.com , using the procedure provided in Section 1.1, "Cluster Software Installation" . Create the two-node cluster that consists of z1.example.com and z2.example.com , using the procedure provided in Section 1.2, "Cluster Creation" . As in that example procedure, this use case names the cluster my_cluster . Configure fencing devices for each node of the cluster, using the procedure provided in Section 1.3, "Fencing Configuration" . This example configures fencing using two ports of the APC power switch with a host name of zapc.example.com .
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_administration/ch-nfsserver-haaa
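The cluster creation and fencing steps described in Section 3.1 are typically performed with pcs commands along the following lines. This is a sketch only: the node and cluster names match the example above, but the fence agent (fence_apc_snmp), port mapping, and credentials are placeholders that depend on your power switch:

# Authenticate the nodes and create the two-node cluster (RHEL 7 pcs syntax)
pcs cluster auth z1.example.com z2.example.com
pcs cluster setup --start --name my_cluster z1.example.com z2.example.com
pcs cluster enable --all

# Configure APC power fencing for both nodes (agent parameters and credentials are examples)
pcs stonith create myapc fence_apc_snmp \
    ipaddr="zapc.example.com" pcmk_host_map="z1.example.com:1;z2.example.com:2" \
    login="apc" passwd="apc"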
function::target
function::target Name function::target - Return the process ID of the target process Synopsis Arguments None Description This function returns the process ID of the target process. This is useful in conjunction with the -x PID or -c CMD command-line options to stap. An example of its use is to create scripts that filter on a specific process. -x <pid> : target returns the pid specified by -x . -c <cmd> : target returns the pid for the executed command specified by -c .
[ "target:long()" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-target
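A minimal example of target in use, assuming the standard syscall tapset is installed; the PID, probe point, and output format are illustrative, and on newer kernels the probe may need to be syscall.openat rather than syscall.open:

# Trace files opened only by the target process given with -x
stap -x 1234 -e 'probe syscall.open {
  if (pid() == target())
    printf("%s(%d) opened %s\n", execname(), pid(), filename)
}'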
Chapter 123. AclRuleTransactionalIdResource schema reference
Chapter 123. AclRuleTransactionalIdResource schema reference Used in: AclRule The type property is a discriminator that distinguishes use of the AclRuleTransactionalIdResource type from AclRuleTopicResource , AclRuleGroupResource , AclRuleClusterResource . It must have the value transactionalId for the type AclRuleTransactionalIdResource . Property Property type Description type string Must be transactionalId . name string Name of resource for which given ACL rule applies. Can be combined with patternType field to use prefix pattern. patternType string (one of [prefix, literal]) Describes the pattern used in the resource field. The supported types are literal and prefix . With literal pattern type, the resource field will be used as a definition of a full name. With prefix pattern type, the resource name will be used only as a prefix. Default value is literal .
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-aclruletransactionalidresource-reference
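A hedged sketch of how an AclRuleTransactionalIdResource might appear inside a KafkaUser resource that uses simple authorization; the user name, cluster label, prefix, authentication type, and operations are placeholders, and the surrounding fields follow the KafkaUser schema documented elsewhere in this reference:

oc apply -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
      - resource:
          type: transactionalId
          name: my-app-
          patternType: prefix
        operations:
          - Write
          - Describe
EOF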
Builds using BuildConfig
Builds using BuildConfig OpenShift Container Platform 4.18 Builds Red Hat OpenShift Documentation Team
[ "kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: \"ruby-sample-build\" 1 spec: runPolicy: \"Serial\" 2 triggers: 3 - type: \"GitHub\" github: secret: \"secret101\" - type: \"Generic\" generic: secret: \"secret101\" - type: \"ImageChange\" source: 4 git: uri: \"https://github.com/openshift/ruby-hello-world\" strategy: 5 sourceStrategy: from: kind: \"ImageStreamTag\" name: \"ruby-20-centos7:latest\" output: 6 to: kind: \"ImageStreamTag\" name: \"origin-ruby-sample:latest\" postCommit: 7 script: \"bundle exec rake test\"", "source: git: uri: https://github.com/openshift/ruby-hello-world.git 1 ref: \"master\" images: - from: kind: ImageStreamTag name: myinputimage:latest namespace: mynamespace paths: - destinationDir: app/dir/injected/dir 2 sourcePath: /usr/lib/somefile.jar contextDir: \"app/dir\" 3 dockerfile: \"FROM centos:7\\nRUN yum install -y httpd\" 4", "source: dockerfile: \"FROM centos:7\\nRUN yum install -y httpd\" 1", "source: git: uri: https://github.com/openshift/ruby-hello-world.git ref: \"master\" images: 1 - from: 2 kind: ImageStreamTag name: myinputimage:latest namespace: mynamespace paths: 3 - destinationDir: injected/dir 4 sourcePath: /usr/lib/somefile.jar 5 - from: kind: ImageStreamTag name: myotherinputimage:latest namespace: myothernamespace pullSecret: mysecret 6 paths: - destinationDir: injected/dir sourcePath: /usr/lib/somefile.jar", "oc secrets link builder dockerhub", "source: git: 1 uri: \"https://github.com/openshift/ruby-hello-world\" ref: \"master\" contextDir: \"app/dir\" 2 dockerfile: \"FROM openshift/ruby-22-centos7\\nUSER example\" 3", "source: git: uri: \"https://github.com/openshift/ruby-hello-world\" ref: \"master\" httpProxy: http://proxy.example.com httpsProxy: https://proxy.example.com noProxy: somedomain.com, otherdomain.com", "oc annotate secret mysecret 'build.openshift.io/source-secret-match-uri-1=ssh://bitbucket.atlassian.com:7999/*'", "kind: Secret apiVersion: v1 metadata: name: matches-all-corporate-servers-https-only annotations: build.openshift.io/source-secret-match-uri-1: https://*.mycorp.com/* data: --- kind: Secret apiVersion: v1 metadata: name: override-for-my-dev-servers-https-only annotations: build.openshift.io/source-secret-match-uri-1: https://mydev1.mycorp.com/* build.openshift.io/source-secret-match-uri-2: https://mydev2.mycorp.com/* data:", "oc annotate secret mysecret 'build.openshift.io/source-secret-match-uri-1=https://*.mycorp.com/*'", "apiVersion: \"build.openshift.io/v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: output: to: kind: \"ImageStreamTag\" name: \"sample-image:latest\" source: git: uri: \"https://github.com/user/app.git\" sourceSecret: name: \"basicsecret\" strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"python-33-centos7:latest\"", "oc set build-secret --source bc/sample-build basicsecret", "oc create secret generic <secret_name> --from-file=<path/to/.gitconfig>", "[http] sslVerify=false", "cat .gitconfig", "[user] name = <name> email = <email> [http] sslVerify = false sslCert = /var/run/secrets/openshift.io/source/client.crt sslKey = /var/run/secrets/openshift.io/source/client.key sslCaInfo = /var/run/secrets/openshift.io/source/cacert.crt", "oc create secret generic <secret_name> --from-literal=username=<user_name> \\ 1 --from-literal=password=<password> \\ 2 --from-file=.gitconfig=.gitconfig --from-file=client.crt=/var/run/secrets/openshift.io/source/client.crt --from-file=cacert.crt=/var/run/secrets/openshift.io/source/cacert.crt 
--from-file=client.key=/var/run/secrets/openshift.io/source/client.key", "oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --type=kubernetes.io/basic-auth", "oc create secret generic <secret_name> --from-literal=password=<token> --type=kubernetes.io/basic-auth", "ssh-keygen -t ed25519 -C \"[email protected]\"", "oc create secret generic <secret_name> --from-file=ssh-privatekey=<path/to/ssh/private/key> --from-file=<path/to/known_hosts> \\ 1 --type=kubernetes.io/ssh-auth", "cat intermediateCA.crt intermediateCA.crt rootCA.crt > ca.crt", "oc create secret generic mycert --from-file=ca.crt=</path/to/file> 1", "oc create secret generic <secret_name> --from-file=ssh-privatekey=<path/to/ssh/private/key> --from-file=<path/to/.gitconfig> --type=kubernetes.io/ssh-auth", "oc create secret generic <secret_name> --from-file=ca.crt=<path/to/certificate> --from-file=<path/to/.gitconfig>", "oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=ca-cert=</path/to/file> --type=kubernetes.io/basic-auth", "oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=</path/to/.gitconfig> --type=kubernetes.io/basic-auth", "oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=</path/to/.gitconfig> --from-file=ca-cert=</path/to/file> --type=kubernetes.io/basic-auth", "apiVersion: v1 kind: Secret metadata: name: test-secret namespace: my-namespace type: Opaque 1 data: 2 username: <username> 3 password: <password> stringData: 4 hostname: myapp.mydomain.com 5", "oc create -f <filename>", "oc create secret generic dockerhub --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson", "apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque 1 data: username: <username> password: <password>", "apiVersion: v1 kind: Secret metadata: name: aregistrykey namespace: myapps type: kubernetes.io/dockerconfigjson 1 data: .dockerconfigjson:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2", "oc create -f <your_yaml_file>.yaml", "oc logs secret-example-pod", "oc delete pod secret-example-pod", "apiVersion: v1 kind: Secret metadata: name: test-secret data: username: <username> 1 password: <password> 2 stringData: hostname: myapp.mydomain.com 3 secret.properties: |- 4 property1=valueA property2=valueB", "apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"cat /etc/secret-volume/*\" ] volumeMounts: # name must match the volume name below - name: secret-volume mountPath: /etc/secret-volume readOnly: true volumes: - name: secret-volume secret: secretName: test-secret restartPolicy: Never", "apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"export\" ] env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: name: test-secret key: username restartPolicy: Never", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: name: test-secret key: username", "oc create configmap settings-mvn --from-file=settings.xml=<path/to/settings.xml>", "apiVersion: core/v1 kind: 
ConfigMap metadata: name: settings-mvn data: settings.xml: | <settings> ... # Insert maven settings here </settings>", "oc create secret generic secret-mvn --from-file=ssh-privatekey=<path/to/.ssh/id_rsa> --type=kubernetes.io/ssh-auth", "apiVersion: core/v1 kind: Secret metadata: name: secret-mvn type: kubernetes.io/ssh-auth data: ssh-privatekey: | # Insert ssh private key, base64 encoded", "source: git: uri: https://github.com/wildfly/quickstart.git contextDir: helloworld configMaps: - configMap: name: settings-mvn secrets: - secret: name: secret-mvn", "oc new-build openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git --context-dir helloworld --build-secret \"secret-mvn\" --build-config-map \"settings-mvn\"", "source: git: uri: https://github.com/wildfly/quickstart.git contextDir: helloworld configMaps: - configMap: name: settings-mvn destinationDir: \".m2\" secrets: - secret: name: secret-mvn destinationDir: \".ssh\"", "oc new-build openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git --context-dir helloworld --build-secret \"secret-mvn:.ssh\" --build-config-map \"settings-mvn:.m2\"", "FROM centos/ruby-22-centos7 USER root COPY ./secret-dir /secrets COPY ./config / Create a shell script that will output secrets and ConfigMaps when the image is run RUN echo '#!/bin/sh' > /input_report.sh RUN echo '(test -f /secrets/secret1 && echo -n \"secret1=\" && cat /secrets/secret1)' >> /input_report.sh RUN echo '(test -f /config && echo -n \"relative-configMap=\" && cat /config)' >> /input_report.sh RUN chmod 755 /input_report.sh CMD [\"/bin/sh\", \"-c\", \"/input_report.sh\"]", "#!/bin/sh APP_VERSION=1.0 wget http://repository.example.com/app/app-USDAPP_VERSION.jar -O app.jar", "#!/bin/sh exec java -jar app.jar", "FROM jboss/base-jdk:8 ENV APP_VERSION 1.0 RUN wget http://repository.example.com/app/app-USDAPP_VERSION.jar -O app.jar EXPOSE 8080 CMD [ \"java\", \"-jar\", \"app.jar\" ]", "auths: index.docker.io/v1/: 1 auth: \"YWRfbGzhcGU6R2labnRib21ifTE=\" 2 email: \"[email protected]\" 3 docker.io/my-namespace/my-user/my-image: 4 auth: \"GzhYWRGU6R2fbclabnRgbkSp=\"\" email: \"[email protected]\" docker.io/my-namespace: 5 auth: \"GzhYWRGU6R2deesfrRgbkSp=\"\" email: \"[email protected]\"", "oc create secret generic dockerhub --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson", "spec: output: to: kind: \"DockerImage\" name: \"private.registry.com/org/private-image:latest\" pushSecret: name: \"dockerhub\"", "oc set build-secret --push bc/sample-build dockerhub", "oc secrets link builder dockerhub", "strategy: sourceStrategy: from: kind: \"DockerImage\" name: \"docker.io/user/private_repository\" pullSecret: name: \"dockerhub\"", "oc set build-secret --pull bc/sample-build dockerhub", "oc secrets link builder dockerhub", "env: - name: FIELDREF_ENV valueFrom: fieldRef: fieldPath: metadata.name", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: MYVAL valueFrom: secretKeyRef: key: myval name: mysecret", "spec: output: to: kind: \"ImageStreamTag\" name: \"sample-image:latest\"", "spec: output: to: kind: \"DockerImage\" name: \"my-registry.mycompany.com:5000/myimages/myimage:tag\"", "spec: output: to: kind: \"ImageStreamTag\" name: \"my-image:latest\" imageLabels: - name: \"vendor\" value: \"MyCompany\" - name: \"authoritative-source-url\" value: \"registry.mycompany.com\"", "strategy: dockerStrategy: from: kind: \"ImageStreamTag\" name: 
\"debian:latest\"", "strategy: dockerStrategy: dockerfilePath: dockerfiles/app1/Dockerfile", "dockerStrategy: env: - name: \"HTTP_PROXY\" value: \"http://myproxy.net:5187/\"", "dockerStrategy: buildArgs: - name: \"version\" value: \"latest\"", "strategy: dockerStrategy: imageOptimizationPolicy: SkipLayers", "spec: dockerStrategy: volumes: - name: secret-mvn 1 mounts: - destinationPath: /opt/app-root/src/.ssh 2 source: type: Secret 3 secret: secretName: my-secret 4 - name: settings-mvn 5 mounts: - destinationPath: /opt/app-root/src/.m2 6 source: type: ConfigMap 7 configMap: name: my-config 8 - name: my-csi-volume 9 mounts: - destinationPath: /opt/app-root/src/some_path 10 source: type: CSI 11 csi: driver: csi.sharedresource.openshift.io 12 readOnly: true 13 volumeAttributes: 14 attribute: value", "strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"incremental-image:latest\" 1 incremental: true 2", "strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"builder-image:latest\" scripts: \"http://somehost.com/scripts_directory\" 1", "sourceStrategy: env: - name: \"DISABLE_ASSET_COMPILATION\" value: \"true\"", "#!/bin/bash restore build artifacts if [ \"USD(ls /tmp/s2i/artifacts/ 2>/dev/null)\" ]; then mv /tmp/s2i/artifacts/* USDHOME/. fi move the application source mv /tmp/s2i/src USDHOME/src build application artifacts pushd USD{HOME} make all install the artifacts make install popd", "#!/bin/bash run the application /opt/application/run.sh", "#!/bin/bash pushd USD{HOME} if [ -d deps ]; then # all deps contents to tar stream tar cf - deps fi popd", "#!/bin/bash inform the user how to use the image cat <<EOF This is a S2I sample builder image, to use it, install https://github.com/openshift/source-to-image EOF", "spec: sourceStrategy: volumes: - name: secret-mvn 1 mounts: - destinationPath: /opt/app-root/src/.ssh 2 source: type: Secret 3 secret: secretName: my-secret 4 - name: settings-mvn 5 mounts: - destinationPath: /opt/app-root/src/.m2 6 source: type: ConfigMap 7 configMap: name: my-config 8 - name: my-csi-volume 9 mounts: - destinationPath: /opt/app-root/src/some_path 10 source: type: CSI 11 csi: driver: csi.sharedresource.openshift.io 12 readOnly: true 13 volumeAttributes: 14 attribute: value", "strategy: customStrategy: from: kind: \"DockerImage\" name: \"openshift/sti-image-builder\"", "strategy: customStrategy: secrets: - secretSource: 1 name: \"secret1\" mountPath: \"/tmp/secret1\" 2 - secretSource: name: \"secret2\" mountPath: \"/tmp/secret2\"", "customStrategy: env: - name: \"HTTP_PROXY\" value: \"http://myproxy.net:5187/\"", "oc set env <enter_variables>", "kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"sample-pipeline\" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: |- node('agent') { stage 'build' openshiftBuild(buildConfig: 'ruby-sample-build', showBuildLogs: 'true') stage 'deploy' openshiftDeploy(deploymentConfig: 'frontend') }", "kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"sample-pipeline\" spec: source: git: uri: \"https://github.com/openshift/ruby-hello-world\" strategy: jenkinsPipelineStrategy: jenkinsfilePath: some/repo/dir/filename 1", "jenkinsPipelineStrategy: env: - name: \"FOO\" value: \"BAR\"", "oc project <project_name>", "oc new-app jenkins-ephemeral 1", "kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"nodejs-sample-pipeline\" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: <pipeline content from below> type: JenkinsPipeline", "def templatePath = 
'https://raw.githubusercontent.com/openshift/nodejs-ex/master/openshift/templates/nodejs-mongodb.json' 1 def templateName = 'nodejs-mongodb-example' 2 pipeline { agent { node { label 'nodejs' 3 } } options { timeout(time: 20, unit: 'MINUTES') 4 } stages { stage('preamble') { steps { script { openshift.withCluster() { openshift.withProject() { echo \"Using project: USD{openshift.project()}\" } } } } } stage('cleanup') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.selector(\"all\", [ template : templateName ]).delete() 5 if (openshift.selector(\"secrets\", templateName).exists()) { 6 openshift.selector(\"secrets\", templateName).delete() } } } } } } stage('create') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.newApp(templatePath) 7 } } } } } stage('build') { steps { script { openshift.withCluster() { openshift.withProject() { def builds = openshift.selector(\"bc\", templateName).related('builds') timeout(5) { 8 builds.untilEach(1) { return (it.object().status.phase == \"Complete\") } } } } } } } stage('deploy') { steps { script { openshift.withCluster() { openshift.withProject() { def rm = openshift.selector(\"dc\", templateName).rollout() timeout(5) { 9 openshift.selector(\"dc\", templateName).related('pods').untilEach(1) { return (it.object().status.phase == \"Running\") } } } } } } } stage('tag') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.tag(\"USD{templateName}:latest\", \"USD{templateName}-staging:latest\") 10 } } } } } } }", "oc create -f nodejs-sample-pipeline.yaml", "oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/jenkins/pipeline/nodejs-sample-pipeline.yaml", "oc start-build nodejs-sample-pipeline", "FROM registry.redhat.io/rhel8/buildah In this example, `/tmp/build` contains the inputs that build when this custom builder image is run. Normally the custom builder image fetches this content from some location at build time, by using git clone as an example. ADD dockerfile.sample /tmp/input/Dockerfile ADD build.sh /usr/bin RUN chmod a+x /usr/bin/build.sh /usr/bin/build.sh contains the actual custom build logic that will be run when this custom builder image is run. ENTRYPOINT [\"/usr/bin/build.sh\"]", "FROM registry.access.redhat.com/ubi9/ubi RUN touch /tmp/build", "#!/bin/sh Note that in this case the build inputs are part of the custom builder image, but normally this is retrieved from an external source. cd /tmp/input OUTPUT_REGISTRY and OUTPUT_IMAGE are env variables provided by the custom build framework TAG=\"USD{OUTPUT_REGISTRY}/USD{OUTPUT_IMAGE}\" performs the build of the new image defined by dockerfile.sample buildah --storage-driver vfs bud --isolation chroot -t USD{TAG} . buildah requires a slight modification to the push secret provided by the service account to use it for pushing the image cp /var/run/secrets/openshift.io/push/.dockercfg /tmp (echo \"{ \\\"auths\\\": \" ; cat /var/run/secrets/openshift.io/push/.dockercfg ; echo \"}\") > /tmp/.dockercfg push the new image to the target for the build buildah --storage-driver vfs push --tls-verify=false --authfile /tmp/.dockercfg USD{TAG}", "oc new-build --binary --strategy=docker --name custom-builder-image", "oc start-build custom-builder-image --from-dir . 
-F", "kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: sample-custom-build labels: name: sample-custom-build annotations: template.alpha.openshift.io/wait-for-ready: 'true' spec: strategy: type: Custom customStrategy: forcePull: true from: kind: ImageStreamTag name: custom-builder-image:latest namespace: <yourproject> 1 output: to: kind: ImageStreamTag name: sample-custom:latest", "oc create -f buildconfig.yaml", "kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: sample-custom spec: {}", "oc create -f imagestream.yaml", "oc start-build sample-custom-build -F", "oc start-build <buildconfig_name>", "oc start-build --from-build=<build_name>", "oc start-build <buildconfig_name> --follow", "oc start-build <buildconfig_name> --env=<key>=<value>", "oc start-build hello-world --from-repo=../hello-world --commit=v2", "oc cancel-build <build_name>", "oc cancel-build <build1_name> <build2_name> <build3_name>", "oc cancel-build bc/<buildconfig_name>", "oc cancel-build bc/<buildconfig_name>", "oc delete bc <BuildConfigName>", "oc delete --cascade=false bc <BuildConfigName>", "oc describe build <build_name>", "oc describe build <build_name>", "oc logs -f bc/<buildconfig_name>", "oc logs --version=<number> bc/<buildconfig_name>", "sourceStrategy: env: - name: \"BUILD_LOGLEVEL\" value: \"2\" 1", "type: \"GitHub\" github: secretReference: name: \"mysecret\"", "- kind: Secret apiVersion: v1 metadata: name: mysecret creationTimestamp: data: WebHookSecretKey: c2VjcmV0dmFsdWUx", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"true\" name: webhook-access-unauthenticated namespace: <namespace> 1 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: \"system:webhook\" subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: \"system:unauthenticated\"", "oc apply -f add-webhooks-unauth.yaml", "type: \"GitHub\" github: secretReference: name: \"mysecret\"", "https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github", "oc describe bc/<name_of_your_BuildConfig>", "https://api.starter-us-east-1.openshift.com:443/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github", "curl -H \"X-GitHub-Event: push\" -H \"Content-Type: application/json\" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github", "type: \"GitLab\" gitlab: secretReference: name: \"mysecret\"", "https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/gitlab", "oc describe bc <name>", "curl -H \"X-GitLab-Event: Push Hook\" -H \"Content-Type: application/json\" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/gitlab", "type: \"Bitbucket\" bitbucket: secretReference: name: \"mysecret\"", "https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/bitbucket", "oc describe bc <name>", "curl -H \"X-Event-Key: repo:push\" -H \"Content-Type: application/json\" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/bitbucket", "type: \"Generic\" generic: secretReference: 
name: \"mysecret\" allowEnv: true 1", "https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic", "curl -X POST -k https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic", "git: uri: \"<url to git repository>\" ref: \"<optional git reference>\" commit: \"<commit hash identifying a specific git commit>\" author: name: \"<author name>\" email: \"<author e-mail>\" committer: name: \"<committer name>\" email: \"<committer e-mail>\" message: \"<commit message>\" env: 1 - name: \"<variable name>\" value: \"<variable value>\"", "curl -H \"Content-Type: application/yaml\" --data-binary @payload_file.yaml -X POST -k https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic", "oc describe bc <name>", "kind: \"ImageStream\" apiVersion: \"v1\" metadata: name: \"ruby-20-centos7\"", "strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"ruby-20-centos7:latest\"", "type: \"ImageChange\" 1 imageChange: {} type: \"ImageChange\" 2 imageChange: from: kind: \"ImageStreamTag\" name: \"custom-image:latest\"", "strategy: sourceStrategy: from: kind: \"DockerImage\" name: \"172.30.17.3:5001/mynamespace/ruby-20-centos7:<immutableid>\"", "type: \"ImageChange\" imageChange: from: kind: \"ImageStreamTag\" name: \"custom-image:latest\" paused: true", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: bc-ict-example namespace: bc-ict-example-namespace spec: triggers: - imageChange: from: kind: ImageStreamTag name: input:latest namespace: bc-ict-example-namespace - imageChange: from: kind: ImageStreamTag name: input2:latest namespace: bc-ict-example-namespace type: ImageChange status: imageChangeTriggers: - from: name: input:latest namespace: bc-ict-example-namespace lastTriggerTime: \"2021-06-30T13:47:53Z\" lastTriggeredImageID: image-registry.openshift-image-registry.svc:5000/bc-ict-example-namespace/input@sha256:0f88ffbeb9d25525720bfa3524cb1bf0908b7f791057cf1acfae917b11266a69 - from: name: input2:latest namespace: bc-ict-example-namespace lastTriggeredImageID: image-registry.openshift-image-registry.svc:5000/bc-ict-example-namespace/input2@sha256:0f88ffbeb9d25525720bfa3524cb2ce0908b7f791057cf1acfae917b11266a69 lastVersion: 1", "Then you use the `name` and `namespace` from that build to find the corresponding image change trigger in `buildConfig.spec.triggers`.", "type: \"ConfigChange\"", "oc set triggers bc <name> --from-github", "oc set triggers bc <name> --from-image='<image>'", "oc set triggers bc <name> --from-bitbucket --remove", "oc set triggers --help", "postCommit: script: \"bundle exec rake test --verbose\"", "postCommit: command: [\"/bin/bash\", \"-c\", \"bundle exec rake test --verbose\"]", "postCommit: command: [\"bundle\", \"exec\", \"rake\", \"test\"] args: [\"--verbose\"]", "oc set build-hook bc/mybc --post-commit --command -- bundle exec rake test --verbose", "oc set build-hook bc/mybc --post-commit --script=\"bundle exec rake test --verbose\"", "apiVersion: \"v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: resources: limits: cpu: \"100m\" 1 memory: \"256Mi\" 2", "resources: requests: 1 cpu: \"100m\" memory: \"256Mi\"", "spec: completionDeadlineSeconds: 1800", "apiVersion: \"v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: nodeSelector: 1 key1: value1 key2: value2", "apiVersion: build.openshift.io/v1 kind: BuildConfig 
metadata: name: artifact-build spec: output: to: kind: ImageStreamTag name: artifact-image:latest source: git: uri: https://github.com/openshift/openshift-jee-sample.git ref: \"master\" strategy: sourceStrategy: from: kind: ImageStreamTag name: wildfly:10.1 namespace: openshift", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: image-build spec: output: to: kind: ImageStreamTag name: image-build:latest source: dockerfile: |- FROM jee-runtime:latest COPY ROOT.war /deployments/ROOT.war images: - from: 1 kind: ImageStreamTag name: artifact-image:latest paths: 2 - sourcePath: /wildfly/standalone/deployments/ROOT.war destinationDir: \".\" strategy: dockerStrategy: from: 3 kind: ImageStreamTag name: jee-runtime:latest triggers: - imageChange: {} type: ImageChange", "apiVersion: \"v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: successfulBuildsHistoryLimit: 2 1 failedBuildsHistoryLimit: 2 2", "oc tag --source=docker registry.redhat.io/ubi9/ubi:latest ubi9:latest -n openshift", "apiVersion: image.openshift.io/v1 kind: ImageStream metadata: name: ubi9 namespace: openshift spec: tags: - from: kind: DockerImage name: registry.redhat.io/ubi9/ubi:latest name: latest referencePolicy: type: Source", "oc tag --source=docker registry.redhat.io/ubi9/ubi:latest ubi:latest", "apiVersion: image.openshift.io/v1 kind: ImageStream metadata: name: ubi9 spec: tags: - from: kind: DockerImage name: registry.redhat.io/ubi9/ubi:latest name: latest referencePolicy: type: Source", "cat << EOF > secret-template.txt kind: Secret apiVersion: v1 metadata: name: etc-pki-entitlement type: Opaque data: {{ range \\USDkey, \\USDvalue := .data }} {{ \\USDkey }}: {{ \\USDvalue }} {{ end }} EOF oc get secret etc-pki-entitlement -n openshift-config-managed -o=go-template-file --template=secret-template.txt | oc apply -f -", "strategy: dockerStrategy: from: kind: ImageStreamTag name: ubi9:latest volumes: - name: etc-pki-entitlement mounts: - destinationPath: /etc/pki/entitlement source: type: Secret secret: secretName: etc-pki-entitlement", "FROM registry.redhat.io/ubi9/ubi:latest RUN rm -rf /etc/rhsm-host 1 RUN yum --enablerepo=codeready-builder-for-rhel-9-x86_64-rpms install \\ 2 nss_wrapper uid_wrapper -y && yum clean all -y RUN ln -s /run/secrets/rhsm /etc/rhsm-host 3", "[test-<name>] name=test-<number> baseurl = https://satellite.../content/dist/rhel/server/7/7Server/x86_64/os enabled=1 gpgcheck=0 sslverify=0 sslclientkey = /etc/pki/entitlement/...-key.pem sslclientcert = /etc/pki/entitlement/....pem", "oc create configmap yum-repos-d --from-file /path/to/satellite.repo", "strategy: dockerStrategy: from: kind: ImageStreamTag name: ubi9:latest volumes: - name: yum-repos-d mounts: - destinationPath: /etc/yum.repos.d source: type: ConfigMap configMap: name: yum-repos-d - name: etc-pki-entitlement mounts: - destinationPath: /etc/pki/entitlement source: type: Secret secret: secretName: etc-pki-entitlement", "FROM registry.redhat.io/ubi9/ubi:latest RUN rm -rf /etc/rhsm-host 1 RUN yum --enablerepo=codeready-builder-for-rhel-9-x86_64-rpms install \\ 2 nss_wrapper uid_wrapper -y && yum clean all -y RUN ln -s /run/secrets/rhsm /etc/rhsm-host 3", "oc apply -f - <<EOF kind: SharedSecret apiVersion: sharedresource.openshift.io/v1alpha1 metadata: name: etc-pki-entitlement spec: secretRef: name: etc-pki-entitlement namespace: openshift-config-managed EOF", "oc apply -f - <<EOF apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: builder-etc-pki-entitlement namespace: build-namespace rules: - 
apiGroups: - sharedresource.openshift.io resources: - sharedsecrets resourceNames: - etc-pki-entitlement verbs: - use EOF", "oc create rolebinding builder-etc-pki-entitlement --role=builder-etc-pki-entitlement --serviceaccount=build-namespace:builder", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: uid-wrapper-rhel9 namespace: build-namespace spec: runPolicy: Serial source: dockerfile: | FROM registry.redhat.io/ubi9/ubi:latest RUN rm -rf /etc/rhsm-host 1 RUN yum --enablerepo=codeready-builder-for-rhel-9-x86_64-rpms install \\ 2 nss_wrapper uid_wrapper -y && yum clean all -y RUN ln -s /run/secrets/rhsm /etc/rhsm-host 3 strategy: type: Docker dockerStrategy: volumes: - mounts: - destinationPath: \"/etc/pki/entitlement\" name: etc-pki-entitlement source: csi: driver: csi.sharedresource.openshift.io readOnly: true 4 volumeAttributes: sharedSecret: etc-pki-entitlement 5 type: CSI", "oc start-build uid-wrapper-rhel9 -n build-namespace -F", "oc annotate clusterrolebinding.rbac system:build-strategy-docker-binding 'rbac.authorization.kubernetes.io/autoupdate=false' --overwrite", "oc adm policy remove-cluster-role-from-group system:build-strategy-docker system:authenticated", "oc get clusterrole admin -o yaml | grep \"builds/docker\"", "oc get clusterrole edit -o yaml | grep \"builds/docker\"", "oc adm policy add-cluster-role-to-user system:build-strategy-docker devuser", "oc adm policy add-role-to-user system:build-strategy-docker devuser -n devproject", "oc edit build.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Build 1 metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 2 name: cluster resourceVersion: \"107233\" selfLink: /apis/config.openshift.io/v1/builds/cluster uid: e2e9cc14-78a9-11e9-b92b-06d6c7da38dc spec: buildDefaults: 2 defaultProxy: 3 httpProxy: http://proxy.com httpsProxy: https://proxy.com noProxy: internal.com env: 4 - name: envkey value: envvalue gitProxy: 5 httpProxy: http://gitproxy.com httpsProxy: https://gitproxy.com noProxy: internalgit.com imageLabels: 6 - name: labelkey value: labelvalue resources: 7 limits: cpu: 100m memory: 50Mi requests: cpu: 10m memory: 10Mi buildOverrides: 8 imageLabels: 9 - name: labelkey value: labelvalue nodeSelector: 10 selectorkey: selectorvalue tolerations: 11 - effect: NoSchedule key: node-role.kubernetes.io/builds operator: Exists", "requested access to the resource is denied", "oc describe quota", "secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60", "oc delete secret <secret_name>", "oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-", "oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num-", "oc create configmap registry-cas -n openshift-config --from-file=myregistry.corp.com..5000=/etc/docker/certs.d/myregistry.corp.com:5000/ca.crt --from-file=otherregistry.com=/etc/docker/certs.d/otherregistry.com/ca.crt", "oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-cas\"}}}' --type=merge" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/builds_using_buildconfig/index
Chapter 44. local
Chapter 44. local This chapter describes the commands under the local command. 44.1. local ip association create Create Local IP Association Usage: Table 44.1. Positional arguments Value Summary <local-ip> Local ip that the port association belongs to (name or ID) <fixed-port> The id or name of port to allocate local ip Association Table 44.2. Command arguments Value Summary -h, --help Show this help message and exit --fixed-ip <fixed-ip> Fixed ip for local ip association --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. Table 44.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 44.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 44.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 44.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 44.2. local ip association delete Delete Local IP association(s) Usage: Table 44.7. Positional arguments Value Summary <local-ip> Local ip that the port association belongs to (name or id) <fixed-port-id> The fixed port id of local ip association Table 44.8. Command arguments Value Summary -h, --help Show this help message and exit 44.3. local ip association list List Local IP Associations Usage: Table 44.9. Positional arguments Value Summary <local-ip> Local ip that port associations belongs to Table 44.10. Command arguments Value Summary -h, --help Show this help message and exit --fixed-port <fixed-port> Filter the list result by the id or name of the fixed port --fixed-ip <fixed-ip> Filter the list result by fixed ip --host <host> Filter the list result by given host --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. Table 44.11. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 44.12. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 44.13. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 44.14. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 44.4. local ip create Create Local IP Usage: Table 44.15. Command arguments Value Summary -h, --help Show this help message and exit --name <name> New local ip name --description <description> New local ip description --network <network> Network to allocate local ip (name or id) --local-port <local-port> Port to allocate local ip (name or id) --local-ip-address <local-ip-address> Ip address or cidr --ip-mode <ip-mode> Local ip ip mode --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. Table 44.16. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 44.17. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 44.18. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 44.19. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 44.5. local ip delete Delete local IP(s) Usage: Table 44.20. Positional arguments Value Summary <local-ip> Local ip(s) to delete (name or id) Table 44.21. Command arguments Value Summary -h, --help Show this help message and exit 44.6. local ip list List local IPs Usage: Table 44.22. Command arguments Value Summary -h, --help Show this help message and exit --name <name> List only local ips of given name in output --project <project> List local ips according to their project (name or id) --network <network> List local ip(s) according to given network (name or ID) --local-port <local-port> List local ip(s) according to given port (name or id) --local-ip-address <local-ip-address> List local ip(s) according to given local ip address --ip-mode <ip_mode> List local ip(s) according to given ip mode --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. Table 44.23. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 44.24. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 44.25. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 44.26. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. 
--fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 44.7. local ip set Set local ip properties Usage: Table 44.27. Positional arguments Value Summary <local-ip> Local ip to modify (name or id) Table 44.28. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set local ip name --description <description> Set local ip description 44.8. local ip show Display local IP details Usage: Table 44.29. Positional arguments Value Summary <local-ip> Local ip to display (name or id) Table 44.30. Command arguments Value Summary -h, --help Show this help message and exit Table 44.31. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 44.32. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 44.33. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 44.34. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
[ "openstack local ip association create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--fixed-ip <fixed-ip>] [--project-domain <project-domain>] <local-ip> <fixed-port>", "openstack local ip association delete [-h] <local-ip> <fixed-port-id> [<fixed-port-id> ...]", "openstack local ip association list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--fixed-port <fixed-port>] [--fixed-ip <fixed-ip>] [--host <host>] [--project-domain <project-domain>] <local-ip>", "openstack local ip create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <name>] [--description <description>] [--network <network>] [--local-port <local-port>] [--local-ip-address <local-ip-address>] [--ip-mode <ip-mode>] [--project-domain <project-domain>]", "openstack local ip delete [-h] <local-ip> [<local-ip> ...]", "openstack local ip list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--name <name>] [--project <project>] [--network <network>] [--local-port <local-port>] [--local-ip-address <local-ip-address>] [--ip-mode <ip_mode>] [--project-domain <project-domain>]", "openstack local ip set [-h] [--name <name>] [--description <description>] <local-ip>", "openstack local ip show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <local-ip>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/command_line_interface_reference/local
Chapter 7. OVN-Kubernetes network plugin
Chapter 7. OVN-Kubernetes network plugin 7.1. About the OVN-Kubernetes network plugin The OpenShift Dedicated cluster uses a virtualized network for pod and service networks. Part of Red Hat OpenShift Networking, the OVN-Kubernetes network plugin is the default network provider for OpenShift Dedicated. OVN-Kubernetes is based on Open Virtual Network (OVN) and provides an overlay-based networking implementation. A cluster that uses the OVN-Kubernetes plugin also runs Open vSwitch (OVS) on each node. OVN configures OVS on each node to implement the declared network configuration. Note OVN-Kubernetes is the default networking solution for OpenShift Dedicated and single-node OpenShift deployments. OVN-Kubernetes, which arose from the OVS project, uses many of the same constructs, such as open flow rules, to determine how packets travel through the network. For more information, see the Open Virtual Network website . OVN-Kubernetes is a series of daemons for OVS that translate virtual network configurations into OpenFlow rules. OpenFlow is a protocol for communicating with network switches and routers, providing a means for remotely controlling the flow of network traffic on a network device so that network administrators can configure, manage, and monitor the flow of network traffic. OVN-Kubernetes provides more of the advanced functionality not available with OpenFlow . OVN supports distributed virtual routing, distributed logical switches, access control, Dynamic Host Configuration Protocol (DHCP), and DNS. OVN implements distributed virtual routing within logic flows that equate to open flows. For example, if you have a pod that sends out a DHCP request to the DHCP server on the network, a logic flow rule in the request helps the OVN-Kubernetes handle the packet so that the server can respond with gateway, DNS server, IP address, and other information. OVN-Kubernetes runs a daemon on each node. There are daemon sets for the databases and for the OVN controller that run on every node. The OVN controller programs the Open vSwitch daemon on the nodes to support the network provider features: egress IPs, firewalls, routers, hybrid networking, IPSEC encryption, IPv6, network policy, network policy logs, hardware offloading, and multicast. 7.1.1. OVN-Kubernetes purpose The OVN-Kubernetes network plugin is an open-source, fully-featured Kubernetes CNI plugin that uses Open Virtual Network (OVN) to manage network traffic flows. OVN is a community developed, vendor-agnostic network virtualization solution. The OVN-Kubernetes network plugin uses the following technologies: OVN to manage network traffic flows. Kubernetes network policy support and logs, including ingress and egress rules. The Generic Network Virtualization Encapsulation (Geneve) protocol, rather than Virtual Extensible LAN (VXLAN), to create an overlay network between nodes. The OVN-Kubernetes network plugin supports the following capabilities: Hybrid clusters that can run both Linux and Microsoft Windows workloads. This environment is known as hybrid networking . Offloading of network data processing from the host central processing unit (CPU) to compatible network cards and data processing units (DPUs). This is known as hardware offloading . IPv4-primary dual-stack networking on bare-metal, VMware vSphere, IBM Power(R), IBM Z(R), and Red Hat OpenStack Platform (RHOSP) platforms. IPv6 single-stack networking on RHOSP and bare metal platforms. 
IPv6-primary dual-stack networking for a cluster running on a bare-metal, a VMware vSphere, or an RHOSP platform. Egress firewall devices and egress IP addresses. Egress router devices that operate in redirect mode. IPsec encryption of intracluster communications. 7.1.2. OVN-Kubernetes IPv6 and dual-stack limitations The OVN-Kubernetes network plugin has the following limitations: For clusters configured for dual-stack networking, both IPv4 and IPv6 traffic must use the same network interface as the default gateway. If this requirement is not met, pods on the host in the ovnkube-node daemon set enter the CrashLoopBackOff state. If you display a pod with a command such as oc get pod -n openshift-ovn-kubernetes -l app=ovnkube-node -o yaml , the status field contains more than one message about the default gateway, as shown in the following output: I1006 16:09:50.985852 60651 helper_linux.go:73] Found default gateway interface br-ex 192.168.127.1 I1006 16:09:50.985923 60651 helper_linux.go:73] Found default gateway interface ens4 fe80::5054:ff:febe:bcd4 F1006 16:09:50.985939 60651 ovnkube.go:130] multiple gateway interfaces detected: br-ex ens4 The only resolution is to reconfigure the host networking so that both IP families use the same network interface for the default gateway. For clusters configured for dual-stack networking, both the IPv4 and IPv6 routing tables must contain the default gateway. If this requirement is not met, pods on the host in the ovnkube-node daemon set enter the CrashLoopBackOff state. If you display a pod with a command such as oc get pod -n openshift-ovn-kubernetes -l app=ovnkube-node -o yaml , the status field contains more than one message about the default gateway, as shown in the following output: I0512 19:07:17.589083 108432 helper_linux.go:74] Found default gateway interface br-ex 192.168.123.1 F0512 19:07:17.589141 108432 ovnkube.go:133] failed to get default gateway interface The only resolution is to reconfigure the host networking so that both IP families contain the default gateway. 7.1.3. Session affinity Session affinity is a feature that applies to Kubernetes Service objects. You can use session affinity if you want to ensure that each time you connect to a <service_VIP>:<Port>, the traffic is always load balanced to the same back end. For more information, including how to set session affinity based on a client's IP address, see Session affinity . Stickiness timeout for session affinity The OVN-Kubernetes network plugin for OpenShift Dedicated calculates the stickiness timeout for a session from a client based on the last packet. For example, if you run a curl command 10 times, the sticky session timer starts from the tenth packet not the first. As a result, if the client is continuously contacting the service, then the session never times out. The timeout starts when the service has not received a packet for the amount of time set by the timeoutSeconds parameter. 7.2. Migrating from OpenShift SDN network plugin to OVN-Kubernetes network plugin As an OpenShift Dedicated cluster administrator, you can initiate the migration from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin and verify the migration status using the OCM CLI. Some considerations before starting migration initiation are: The cluster version must be 4.16.24 and above. The migration process cannot be interrupted. Migrating back to the SDN network plugin is not possible. Cluster nodes will be rebooted during migration. 
There will be no impact to workloads that are resilient to node disruptions. Migration time can vary between several minutes and hours, depending on the cluster size and workload configurations. 7.2.1. Initiating migration using the OpenShift Cluster Manager API command-line interface (ocm) CLI Warning You can only initiate migration on clusters that are version 4.16.24 and above. Prerequisites You installed the OpenShift Cluster Manager API command-line interface ( ocm ) . Important OpenShift Cluster Manager API command-line interface ( ocm ) is a Developer Preview feature only. For more information about the support scope of Red Hat Developer Preview features, see Developer Preview Support Scope . Procedure Create a JSON file with the following content: { "type": "sdnToOvn" } Optional: Within the JSON file, you can configure internal subnets using any or all of the options join , masquerade , and transit , along with a single CIDR per option, as shown in the following example: { "type": "sdnToOvn", "sdn_to_ovn": { "transit_ipv4": "192.168.255.0/24", "join_ipv4": "192.168.255.0/24", "masquerade_ipv4": "192.168.255.0/24" } } Note OVN-Kubernetes reserves the following IP address ranges: 100.64.0.0/16 . This IP address range is used for the internalJoinSubnet parameter of OVN-Kubernetes by default. 100.88.0.0/16 . This IP address range is used for the internalTransSwitchSubnet parameter of OVN-Kubernetes by default. If these IP addresses have been used by OpenShift SDN or any external networks that might communicate with this cluster, you must patch them to use a different IP address range before initiating the limited live migration. For more information, see Patching OVN-Kubernetes address ranges in the Additional resources section. To initiate the migration, run the following post request in a terminal window: USD ocm post /api/clusters_mgmt/v1/clusters/{cluster_id}/migrations 1 --body=myjsonfile.json 2 1 Replace {cluster_id} with the ID of the cluster you want to migrate to the OVN-Kubernetes network plugin. 2 Replace myjsonfile.json with the name of the JSON file you created in the step. Example output { "kind": "ClusterMigration", "href": "/api/clusters_mgmt/v1/clusters/2gnts65ra30sclb114p8qdc26g5c8o3e/migrations/2gois8j244rs0qrfu9ti2o790jssgh9i", "id": "7sois8j244rs0qrhu9ti2o790jssgh9i", "cluster_id": "2gnts65ra30sclb114p8qdc26g5c8o3e", "type": "sdnToOvn", "state": { "value": "scheduled", "description": "" }, "sdn_to_ovn": { "transit_ipv4": "100.65.0.0/16", "join_ipv4": "100.66.0.0/16" }, "creation_timestamp": "2025-02-05T14:56:34.878467542Z", "updated_timestamp": "2025-02-05T14:56:34.878467542Z" } Verification To check the status of the migration, run the following command: USD ocm get cluster USDcluster_id/migration 1 1 Replace USDcluster_id with the ID of the cluster that the migration was applied to. Additional resources Patching OVN-Kubernetes address ranges
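The procedure above can be condensed into a short, hedged sketch. It assumes the ocm CLI is installed and logged in; the cluster ID is a placeholder, and the request body and response fields follow the examples shown earlier.

# 1. Write the request body for a basic migration (no custom internal subnets).
cat > migration.json <<'EOF'
{
  "type": "sdnToOvn"
}
EOF

# 2. Initiate the migration for the target cluster.
cluster_id=<your_cluster_id>
ocm post /api/clusters_mgmt/v1/clusters/${cluster_id}/migrations --body=migration.json

# 3. Check the migration status; the state value starts as "scheduled",
#    as in the example output above.
ocm get cluster ${cluster_id}/migration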
[ "I1006 16:09:50.985852 60651 helper_linux.go:73] Found default gateway interface br-ex 192.168.127.1 I1006 16:09:50.985923 60651 helper_linux.go:73] Found default gateway interface ens4 fe80::5054:ff:febe:bcd4 F1006 16:09:50.985939 60651 ovnkube.go:130] multiple gateway interfaces detected: br-ex ens4", "I0512 19:07:17.589083 108432 helper_linux.go:74] Found default gateway interface br-ex 192.168.123.1 F0512 19:07:17.589141 108432 ovnkube.go:133] failed to get default gateway interface", "{ \"type\": \"sdnToOvn\" }", "{ \"type\": \"sdnToOvn\", \"sdn_to_ovn\": { \"transit_ipv4\": \"192.168.255.0/24\", \"join_ipv4\": \"192.168.255.0/24\", \"masquerade_ipv4\": \"192.168.255.0/24\" } }", "ocm post /api/clusters_mgmt/v1/clusters/{cluster_id}/migrations 1 --body=myjsonfile.json 2", "{ \"kind\": \"ClusterMigration\", \"href\": \"/api/clusters_mgmt/v1/clusters/2gnts65ra30sclb114p8qdc26g5c8o3e/migrations/2gois8j244rs0qrfu9ti2o790jssgh9i\", \"id\": \"7sois8j244rs0qrhu9ti2o790jssgh9i\", \"cluster_id\": \"2gnts65ra30sclb114p8qdc26g5c8o3e\", \"type\": \"sdnToOvn\", \"state\": { \"value\": \"scheduled\", \"description\": \"\" }, \"sdn_to_ovn\": { \"transit_ipv4\": \"100.65.0.0/16\", \"join_ipv4\": \"100.66.0.0/16\" }, \"creation_timestamp\": \"2025-02-05T14:56:34.878467542Z\", \"updated_timestamp\": \"2025-02-05T14:56:34.878467542Z\" }", "ocm get cluster USDcluster_id/migration 1" ]
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/networking/ovn-kubernetes-network-plugin
Chapter 6. Installing RHEL AI on Azure
Chapter 6. Installing RHEL AI on Azure There are multiple ways you can install, and deploy Red Hat Enterprise Linux AI on Azure. You can purchase RHEL AI from the Azure marketplace . You can download the RHEL AI VHD on the RHEL AI download page and convert it to an Azure image. For installing and deploying Red Hat Enterprise Linux AI on Azure using the VHD, you must first convert the RHEL AI image into an Azure image. You can then launch an instance using the Azure image and deploy RHEL AI on an Azure machine. 6.1. Converting the RHEL AI image into a Azure image To create a bootable image on Azure you must configure your Azure account, create an Azure Storage Container, and create an Azure image using the RHEL AI raw image. Prerequisites You installed the Azure CLI on your specific machine. For more information on installing the Azure CLI, see Install the Azure CLI on Linux . You installed the AzCopy on your specific machine. For more information on installing AzCopy, see Install AzCopy on Linux . Procedure Log in to Azure by running the following command: USD az login Example output of the login USD az login A web browser has been opened at https://login.microsoftonline.com/organizations/oauth2/v2.0/authorize. Please continue the login in the web browser. If no web browser is available or if the web browser fails to open, use device code flow with `az login --use-device-code`. [ { "cloudName": "AzureCloud", "homeTenantId": "c7b976df-89ce-42ec-b3b2-a6b35fd9c0be", "id": "79d7df51-39ec-48b9-a15e-dcf59043c84e", "isDefault": true, "managedByTenants": [], "name": "Team Name", "state": "Enabled", "tenantId": "0a873aea-428f-47bd-9120-73ce0c5cc1da", "user": { "name": "[email protected]", "type": "user" } } ] Log in with the azcopy tool using the following commands: USD keyctl new_session USD azcopy login You need to set up various Azure configurations and create your Azure Storage Container before creating the Azure image. Create an environment variable defining the location of your instance with the following command: USD az_location=eastus Create a resource group and save the name in an environment variable named az_resource_group . The following example creates a resource group named Default in the location eastus . (You can omit this step if you want to use an already existing resource group). USD az_resource_group=Default USD az group create --name USD{az_resource_group} --location USD{az_location} Create an Azure storage account and save the name in an environment variable named az_storage_account by running the following commands: USD az_storage_account=THE_NAME_OF_YOUR_STORAGE_ACCOUNT USD az storage account create \ --name USD{az_storage_account} \ --resource-group USD{az_resource_group} \ --location USD{az_location} \ --sku Standard_LRS Create your Azure Storage Container named as the environment variable az_storage_container with the following commands: USD az_storage_container=NAME_OF_MY_BUCKET USD az storage container create \ --name USD{az_storage_container} \ --account-name USD{az_storage_account} \ --public-access off You can get your Subscription ID from the Azure account list by running the following command: USD az account list --output table Create a variable named ` az_subscription_id` with your Subscription ID . USD az_subscription_id=46c08fb3-83c5-4b59-8372-bf9caf15a681 Grant azcopy write permission to user into the storage container. This example grants permission to the user [email protected] . 
USD az role assignment create \ --assignee [email protected] \ --role "Storage Blob Data Contributor" \ --scope /subscriptions/USD{az_subscription_id}/resourceGroups/USD{az_resource_group}/providers/Microsoft.Storage/storageAccounts/USD{az_storage_account}/blobServices/default/containers/USD{az_storage_container} Now that your Azure storage container is set up, you need to download the Azure VHD image from Red Hat Enterprise Linux AI download page . Set the name you want to use as the RHEL AI Azure image. USD image_name=rhel-ai-1.3 Upload the VHD file to the Azure Storage Container by running the following command: USD az_vhd_url="https://USD{az_storage_account}.blob.core.windows.net/USD{az_storage_container}/USD(basename USD{vhd_file})" USD azcopy copy "USDvhd_file" "USDaz_vhd_url" Create an Azure image from the VHD file you just uploaded with the following command: USD az image create --resource-group USDaz_resource_group \ --name "USDimage_name" \ --source "USD{az_vhd_url}" \ --location USD{az_location} \ --os-type Linux \ --hyper-v-generation V2 6.2. Deploying your instance on Azure using the CLI You can launch an instance with your new RHEL AI Azure image from the Azure web console or the CLI. You can use whichever method of deployment you want to launch your instance. The following procedure displays how you can use the CLI to launch an Azure instance with the custom Azure image If you choose to use the CLI as a deployment option, there are several configurations you have to create, as shown in "Prerequisites". Prerequisites You created your RHEL AI Azure image. For more information, see "Converting the RHEL AI image to an Azure image". You installed the Azure CLI on your specific machine, see Install the Azure CLI on Linux . Procedure Log in to your Azure account by running the following command: USD az login You need to select the instance profile that you want to use for the deployment. List all the profiles in the desired region by running the following command: USD az vm list-sizes --location <region> --output table Make a note of your preferred instance profile, you will need it for your instance deployment. You can now start creating your Azure instance. Populate environment variables for when you create the instance. name=my-rhelai-instance az_location=eastus az_resource_group=my_resource_group az_admin_username=azureuser az_vm_size=Standard_ND96isr_H100_v5 az_image=my-custom-rhelai-image sshpubkey=USDHOME/.ssh/id_rsa.pub disk_size=1024 You can launch your instance, by running the following command: USD az vm create \ --resource-group USDaz_resource_group \ --name USD{name} \ --image USD{az_image} \ --size USD{az_vm_size} \ --location USD{az_location} \ --admin-username USD{az_admin_username} \ --ssh-key-values @USDsshpubkey \ --authentication-type ssh \ --nic-delete-option Delete \ --accelerated-networking true \ --os-disk-size-gb 1024 \ --os-disk-name USD{name}-USD{az_location} Verification To verify that your Red Hat Enterprise Linux AI tools are installed correctly, run the ilab command: USD ilab Example output USD ilab Usage: ilab [OPTIONS] COMMAND [ARGS]... CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/<user>/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. 
Commands: config Command Group for Interacting with the Config of InstructLab. data Command Group for Interacting with the Data generated by... model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat generate data generate serve model serve train model train
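The verification above assumes you already have a shell on the new instance. A hedged sketch of connecting from your workstation follows; it assumes the environment variables from the deployment step are still set and that the instance was created with a public IP address.

# Look up the public IP address of the instance.
public_ip=$(az vm show --show-details \
    --resource-group ${az_resource_group} \
    --name ${name} \
    --query publicIps --output tsv)

# Connect as the admin user configured earlier and confirm the RHEL AI tools respond.
ssh ${az_admin_username}@${public_ip} ilab --version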
[ "az login", "az login A web browser has been opened at https://login.microsoftonline.com/organizations/oauth2/v2.0/authorize. Please continue the login in the web browser. If no web browser is available or if the web browser fails to open, use device code flow with `az login --use-device-code`. [ { \"cloudName\": \"AzureCloud\", \"homeTenantId\": \"c7b976df-89ce-42ec-b3b2-a6b35fd9c0be\", \"id\": \"79d7df51-39ec-48b9-a15e-dcf59043c84e\", \"isDefault\": true, \"managedByTenants\": [], \"name\": \"Team Name\", \"state\": \"Enabled\", \"tenantId\": \"0a873aea-428f-47bd-9120-73ce0c5cc1da\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]", "keyctl new_session azcopy login", "az_location=eastus", "az_resource_group=Default az group create --name USD{az_resource_group} --location USD{az_location}", "az_storage_account=THE_NAME_OF_YOUR_STORAGE_ACCOUNT", "az storage account create --name USD{az_storage_account} --resource-group USD{az_resource_group} --location USD{az_location} --sku Standard_LRS", "az_storage_container=NAME_OF_MY_BUCKET az storage container create --name USD{az_storage_container} --account-name USD{az_storage_account} --public-access off", "az account list --output table", "az_subscription_id=46c08fb3-83c5-4b59-8372-bf9caf15a681", "az role assignment create --assignee [email protected] --role \"Storage Blob Data Contributor\" --scope /subscriptions/USD{az_subscription_id}/resourceGroups/USD{az_resource_group}/providers/Microsoft.Storage/storageAccounts/USD{az_storage_account}/blobServices/default/containers/USD{az_storage_container}", "image_name=rhel-ai-1.3", "az_vhd_url=\"https://USD{az_storage_account}.blob.core.windows.net/USD{az_storage_container}/USD(basename USD{vhd_file})\" azcopy copy \"USDvhd_file\" \"USDaz_vhd_url\"", "az image create --resource-group USDaz_resource_group --name \"USDimage_name\" --source \"USD{az_vhd_url}\" --location USD{az_location} --os-type Linux --hyper-v-generation V2", "az login", "az vm list-sizes --location <region> --output table", "name=my-rhelai-instance az_location=eastus az_resource_group=my_resource_group az_admin_username=azureuser az_vm_size=Standard_ND96isr_H100_v5 az_image=my-custom-rhelai-image sshpubkey=USDHOME/.ssh/id_rsa.pub disk_size=1024", "az vm create --resource-group USDaz_resource_group --name USD{name} --image USD{az_image} --size USD{az_vm_size} --location USD{az_location} --admin-username USD{az_admin_username} --ssh-key-values @USDsshpubkey --authentication-type ssh --nic-delete-option Delete --accelerated-networking true --os-disk-size-gb 1024 --os-disk-name USD{name}-USD{az_location}", "ilab", "ilab Usage: ilab [OPTIONS] COMMAND [ARGS] CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/<user>/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. data Command Group for Interacting with the Data generated by model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat generate data generate serve model serve train model train" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.3/html/installing/installing_azure
Chapter 14. Finding Directory Entries
Chapter 14. Finding Directory Entries Entries in the directory can be searched for and found using the command line or the web console. 14.1. Finding Directory Entries Using the Command Line You can use the ldapsearch command-line utility to search for directory entries. This utility opens a connection to a specified server using the specified identity and credentials and locates entries based on a specified search filter. The search scope can include: a single entry ( -s base ) an entry immediate subentries ( -s one ) an entire tree or subtree ( -s sub ) Note A common mistake is to assume that the directory is searched based on the attributes used in the distinguished name. The distinguished name is only a unique identifier for the directory entry and cannot be used as a search key. Instead, search for entries based on the attribute-data pairs stored on the entry itself. Thus, if the distinguished name of an entry is uid=bjensen,ou=People,dc=example,dc=com , then a search for dc=example does not match that entry unless dc:example has explicitly been added as an attribute in that entry. Search results are returned in LDIF format. LDIF is defined in RFC 2849 and is described in detail in Appendix B, LDAP Data Interchange Format . This section contains information about the following topics: Section 14.1.1, "ldapsearch Command-Line Format" Section 14.1.2, "Commonly Used ldapsearch Options" Section 14.1.3, "Using Special Characters" 14.1.1. ldapsearch Command-Line Format The ldapsearch command must use the following format: Either -x (to use simple binds) or -Y (to set the SASL mechanism) must be used to configure the type of connection. options is a series of command-line options. These must be specified before the search filter, if any are used. search_filter is an LDAP search filter as described in Section 14.3, "LDAP Search Filters" . Do not specify a separate search filter if search filters are specified in a file using the -f option. list_of_attributes is a list of attributes separated by a space. Specifying a list of attributes reduces the number of attributes returned in the search results. This list of attributes must appear after the search filter. For an example, see Section 14.4.6, "Displaying Subsets of Attributes" . If a list of attributes is not specified, the search returns values for all attributes permitted by the access control set in the directory, with the exception of operational attributes. For operational attributes to be returned as a result of a search operation, they must be explicitly specified in the search command. To return all operational attributes of an object specify + . To retrieve regular attributes in addition to explicitly specified operational attributes, use an asterisk (*) in the list of attributes in the ldapsearch command. To retrieve only a list of matching DNs, use the special attribute 1.1 . For example: 14.1.2. Commonly Used ldapsearch Options The following table lists the most commonly used ldapsearch command-line options. If a specified value contains a space ( ), the value should be surrounded by single or double quotation marks, such as -b "cn=My Special Group,ou=groups,dc=example,dc=com" . Important The ldapsearch utility from OpenLDAP uses SASL connections by default. To perform a simple bind or to use TLS, use the -x argument to disable SASL and allow other connection methods. Option Description -b Specifies the starting point for the search. The value specified here must be a distinguished name that currently exists in the database. 
This is optional if the LDAP_BASEDN environment variable has been set to a base DN. The value specified in this option should be provided in single or double quotation marks. For example: To search the root DSE entry, specify an empty string here, such as -b "" .
-D Specifies the distinguished name with which to authenticate to the server. This is optional if anonymous access is supported by the server. If specified, this value must be a DN recognized by the Directory Server, and it must also have the authority to search for the entries. For example, -D "uid= user_name ,dc=example,dc=com" .
-H Specifies an LDAP URL to use to connect to the server. For a traditional LDAP URL, this has the following format: The port is optional; it will use the default LDAP port of 389 or LDAPS port of 636 if the port is not given. This can also use an LDAPI URL, with each element separated by the HTML hex code %2F , rather than a forward slash (/): For LDAPI, specify the full path and filename of the LDAPI socket the server is listening to. Since this value is interpreted as an LDAP URL, the forward slash characters (/) in the path and filename must be escaped by encoding them as the URL escape value %2F . The -H option is used instead of -h and -p .
-h Specifies the host name or IP address of the machine on which the Directory Server is installed. For example, -h server.example.com . If a host is not specified, ldapsearch uses localhost. Note Directory Server supports both IPv4 and IPv6 addresses.
-l Specifies the maximum number of seconds to wait for a search request to complete. For example, -l 300 . The default value for the nsslapd-timelimit attribute is 3600 seconds. Regardless of the value specified, ldapsearch will never wait longer than is allowed by the server's nsslapd-timelimit attribute.
-p Specifies the TCP port number that the Directory Server uses. For example, -p 1049 . The default is 389 . If -h is specified, -p must also be specified, even if it gives the default value.
-s scope Specifies the scope of the search. The scope can be one of the following: base searches only the entry specified in the -b option or defined by the LDAP_BASEDN environment variable. one searches only the immediate children of the entry specified in the -b option. Only the children are searched; the actual entry specified in the -b option is not searched. sub searches the entry specified in the -b option and all of its descendants; that is, perform a subtree search starting at the point identified in the -b option. This is the default.
-W Prompts for the password. If this option is not set, anonymous access is used. Alternatively, use the -w option to pass the password to the utility. Note that the password can be visible in the process list for other users and is saved in the shell's history.
-x Disables the default SASL connection to allow simple binds.
-Y SASL_mechanism Sets the SASL mechanism to use for authentication. If no mechanism is set, ldapsearch selects the best mechanism supported by the server. If -x is not used, then the -Y option must be used.
-z number Sets the maximum number of entries to return in a response to a search request. This value overrides the server-side nsslapd-sizelimit parameter when binding using the root DN.
14.1.3. Using Special Characters When using the ldapsearch command-line utility, it may be necessary to specify values that contain characters that have special meaning to the command-line interpreter, such as space ( ), asterisk (*), or backslash (\).
Enclose the value which has the special character in quotation marks (""). For example: Depending on the command-line interpreter, use either single or double quotation marks. In general, use single quotation marks (') to enclose values. Use double quotation marks (") to allow variable interpolation if there are shell variables. Refer to the operating system documentation for more information.
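A short, hypothetical search ties the options above together; the host name, bind DN, and filter value are placeholders.

# Simple bind (-x) as Directory Manager, prompting for the password (-W),
# subtree search (-s sub) under dc=example,dc=com, with a quoted filter value
# that contains a space, returning only the cn and mail attributes.
ldapsearch -x -D "cn=Directory Manager" -W \
    -H ldap://server.example.com:389 \
    -b "dc=example,dc=com" -s sub \
    "(cn=Babs Jensen)" cn mail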
[ "ldapsearch [-x | -Y mechanism ] [ options ] [ search_filter ] [ list_of_attributes ]", "ldapsearch -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -b \"dc=example,dc=com\" -x \"(objectclass=inetorgperson)\" 1.1", "-b \"cn= user ,ou=Product Development,dc=example,dc=com\"", "ldap[s]:// hostname [: port ]", "ldapi://%2F full %2F path %2F to %2Fslapd-example.socket", "-D \"cn= user_name ,ou=Product Development,dc=example,dc=com\"" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/Finding_Directory_Entries
4.6. Importing the Image into Google Compute Engine
4.6. Importing the Image into Google Compute Engine Use the following command to import the image to Google Compute Engine: For information on using the saved image, see https://cloud.google.com/compute/docs/images/create-delete-deprecate-private-images#use_saved_image .
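As an optional follow-up, you can confirm that the image was registered and boot a test instance from it. This is an illustrative sketch only; the zone, machine type, and instance name are placeholders and are not part of the Red Hat Gluster Storage procedure.

# Confirm the image exists in the current project.
gcloud compute images describe rhgs31

# Boot a test instance from the imported image.
gcloud compute instances create rhgs31-test \
    --image rhgs31 \
    --zone us-east1-b \
    --machine-type n1-standard-4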
[ "gcloud compute images create rhgs31 --source-uri gs://rhgs_image_upload/disk.raw.tar.gz" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/deployment_guide_for_public_cloud/sect-documentation-deployment_guide_for_public_cloud-google_cloud_platform-import_image
Chapter 5. Clair security scanner
Chapter 5. Clair security scanner 5.1. Clair configuration overview Clair is configured by a structured YAML file. Each Clair node needs to specify what mode it will run in and a path to a configuration file through CLI flags or environment variables. For example: USD clair -conf ./path/to/config.yaml -mode indexer or USD clair -conf ./path/to/config.yaml -mode matcher The aforementioned commands each start two Clair nodes using the same configuration file. One runs the indexing facilities, while other runs the matching facilities. If you are running Clair in combo mode, you must supply the indexer, matcher, and notifier configuration blocks in the configuration. 5.1.1. Information about using Clair in a proxy environment Environment variables respected by the Go standard library can be specified if needed, for example: HTTP_PROXY USD export HTTP_PROXY=http://<user_name>:<password>@<proxy_host>:<proxy_port> HTTPS_PROXY . USD export HTTPS_PROXY=https://<user_name>:<password>@<proxy_host>:<proxy_port> SSL_CERT_DIR USD export SSL_CERT_DIR=/<path>/<to>/<ssl>/<certificates> NO_PROXY USD export NO_PROXY=<comma_separated_list_of_hosts_and_domains> If you are using a proxy server in your environment with Clair's updater URLs, you must identify which URL needs to be added to the proxy allowlist to ensure that Clair can access them unimpeded. For example, the osv updater requires access to https://osv-vulnerabilities.storage.googleapis.com to fetch ecosystem data dumps. In this scenario, the URL must be added to the proxy allowlist. For a full list of updater URLs, see "Clair updater URLs". You must also ensure that the standard Clair URLs are added to the proxy allowlist: https://search.maven.org/solrsearch/select https://catalog.redhat.com/api/containers/ https://access.redhat.com/security/data/metrics/repository-to-cpe.json https://access.redhat.com/security/data/metrics/container-name-repos-map.json When configuring the proxy server, take into account any authentication requirements or specific proxy settings needed to enable seamless communication between Clair and these URLs. By thoroughly documenting and addressing these considerations, you can ensure that Clair functions effectively while routing its updater traffic through the proxy. 5.1.2. Clair configuration reference The following YAML shows an example Clair configuration: http_listen_addr: "" introspection_addr: "" log_level: "" tls: {} indexer: connstring: "" scanlock_retry: 0 layer_scan_concurrency: 5 migrations: false scanner: {} airgap: false matcher: connstring: "" indexer_addr: "" migrations: false period: "" disable_updaters: false update_retention: 2 matchers: names: nil config: nil updaters: sets: nil config: nil notifier: connstring: "" migrations: false indexer_addr: "" matcher_addr: "" poll_interval: "" delivery_interval: "" disable_summary: false webhook: null amqp: null stomp: null auth: psk: nil trace: name: "" probability: null jaeger: agent: endpoint: "" collector: endpoint: "" username: null password: null service_name: "" tags: nil buffer_max: 0 metrics: name: "" prometheus: endpoint: null dogstatsd: url: "" Note The above YAML file lists every key for completeness. Using this configuration file as-is will result in some options not having their defaults set normally. 5.1.3. Clair general fields The following table describes the general configuration fields available for a Clair deployment. Field Typhttp_listen_ae Description http_listen_addr String Configures where the HTTP API is exposed. 
Default: :6060 introspection_addr String Configures where Clair's metrics and health endpoints are exposed. log_level String Sets the logging level. Requires one of the following strings: debug-color , debug , info , warn , error , fatal , panic tls String A map containing the configuration for serving the HTTP API of TLS/SSL and HTTP/2. .cert String The TLS certificate to be used. Must be a full-chain certificate. Example configuration for general Clair fields The following example shows a Clair configuration. Example configuration for general Clair fields # ... http_listen_addr: 0.0.0.0:6060 introspection_addr: 0.0.0.0:8089 log_level: info # ... 5.1.4. Clair indexer configuration fields The following table describes the configuration fields for Clair's indexer component. Field Type Description indexer Object Provides Clair indexer node configuration. .airgap Boolean Disables HTTP access to the internet for indexers and fetchers. Private IPv4 and IPv6 addresses are allowed. Database connections are unaffected. .connstring String A Postgres connection string. Accepts format as a URL or libpq connection string. .index_report_request_concurrency Integer Rate limits the number of index report creation requests. Setting this to 0 attemps to auto-size this value. Setting a negative value means unlimited. The auto-sizing is a multiple of the number of available cores. The API returns a 429 status code if concurrency is exceeded. .scanlock_retry Integer A positive integer representing seconds. Concurrent indexers lock on manifest scans to avoid clobbering. This value tunes how often a waiting indexer polls for the lock. .layer_scan_concurrency Integer Positive integer limiting the number of concurrent layer scans. Indexers will match a manifest's layer concurrently. This value tunes the number of layers an indexer scans in parallel. .migrations Boolean Whether indexer nodes handle migrations to their database. .scanner String Indexer configuration. Scanner allows for passing configuration options to layer scanners. The scanner will have this configuration pass to it on construction if designed to do so. .scanner.dist String A map with the name of a particular scanner and arbitrary YAML as a value. .scanner.package String A map with the name of a particular scanner and arbitrary YAML as a value. .scanner.repo String A map with the name of a particular scanner and arbitrary YAML as a value. Example indexer configuration The following example shows a hypothetical indexer configuration for Clair. Example indexer configuration # ... indexer: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true # ... 5.1.5. Clair matcher configuration fields The following table describes the configuration fields for Clair's matcher component. Note Differs from matchers configuration fields. Field Type Description matcher Object Provides Clair matcher node configuration. .cache_age String Controls how long users should be hinted to cache responses for. .connstring String A Postgres connection string. Accepts format as a URL or libpq connection string. .max_conn_pool Integer Limits the database connection pool size. Clair allows for a custom connection pool size. This number directly sets how many active database connections are allowed concurrently. This parameter will be ignored in a future version. Users should configure this through the connection string. 
.indexer_addr String A matcher contacts an indexer to create a vulnerability report. The location of this indexer is required. Defaults to 30m . .migrations Boolean Whether matcher nodes handle migrations to their databases. .period String Determines how often updates for new security advisories take place. Defaults to 6h . .disable_updaters Boolean Whether to run background updates or not. Default: False .update_retention Integer Sets the number of update operations to retain between garbage collection cycles. This should be set to a safe MAX value based on database size constraints. Defaults to 10m . If a value of less than 0 is provided, garbage collection is disabled. 2 is the minimum value to ensure updates can be compared to notifications. Example matcher configuration Example matcher configuration # ... matcher: connstring: >- host=<DB_HOST> port=5432 dbname=<matcher> user=<DB_USER> password=D<B_PASS> sslmode=verify-ca sslcert=/etc/clair/ssl/cert.pem sslkey=/etc/clair/ssl/key.pem sslrootcert=/etc/clair/ssl/ca.pem indexer_addr: http://clair-v4/ disable_updaters: false migrations: true period: 6h update_retention: 2 # ... 5.1.6. Clair matchers configuration fields The following table describes the configuration fields for Clair's matchers component. Note Differs from matcher configuration fields. Table 5.1. Matchers configuration fields Field Type Description matchers Array of strings Provides configuration for the in-tree matchers . .names String A list of string values informing the matcher factory about enabled matchers. If value is set to null , the default list of matchers run. The following strings are accepted: alpine-matcher , aws-matcher , debian-matcher , gobin , java-maven , oracle , photon , python , rhel , rhel-container-matcher , ruby , suse , ubuntu-matcher .config String Provides configuration to a specific matcher. A map keyed by the name of the matcher containing a sub-object which will be provided to the matchers factory constructor. For example: Example matchers configuration The following example shows a hypothetical Clair deployment that only requires only the alpine , aws , debian , oracle matchers. Example matchers configuration # ... matchers: names: - "alpine-matcher" - "aws" - "debian" - "oracle" # ... 5.1.7. Clair updaters configuration fields The following table describes the configuration fields for Clair's updaters component. Table 5.2. Updaters configuration fields Field Type Description updaters Object Provides configuration for the matcher's update manager. .sets String A list of values informing the update manager which updaters to run. If value is set to null , the default set of updaters runs the following: alpine , aws , clair.cvss , debian , oracle , photon , osv , rhel , rhcc suse , ubuntu If left blank, zero updaters run. .config String Provides configuration to specific updater sets. A map keyed by the name of the updater set containing a sub-object which will be provided to the updater set's constructor. For a list of the sub-objects for each updater, see "Advanced updater configuration". Example updaters configuration In the following configuration, only the rhel set is configured. The ignore_unpatched variable, which is specific to the rhel updater, is also defined. Example updaters configuration # ... updaters: sets: - rhel config: rhel: ignore_unpatched: false # ... 5.1.8. Clair notifier configuration fields The general notifier configuration fields for Clair are listed below. 
Field Type Description notifier Object Provides Clair notifier node configuration. .connstring String Postgres connection string. Accepts format as URL, or libpq connection string. .migrations Boolean Whether notifier nodes handle migrations to their database. .indexer_addr String A notifier contacts an indexer to create or obtain manifests affected by vulnerabilities. The location of this indexer is required. .matcher_addr String A notifier contacts a matcher to list update operations and acquire diffs. The location of this matcher is required. .poll_interval String The frequency at which the notifier will query a matcher for update operations. .delivery_interval String The frequency at which the notifier attempts delivery of created, or previously failed, notifications. .disable_summary Boolean Controls whether notifications should be summarized to one per manifest. Example notifier configuration The following notifier snippet is for a minimal configuration. Example notifier configuration # ... notifier: connstring: >- host=DB_HOST port=5432 dbname=notifier user=DB_USER password=DB_PASS sslmode=verify-ca sslcert=/etc/clair/ssl/cert.pem sslkey=/etc/clair/ssl/key.pem sslrootcert=/etc/clair/ssl/ca.pem indexer_addr: http://clair-v4/ matcher_addr: http://clair-v4/ delivery_interval: 5s migrations: true poll_interval: 15s webhook: target: "http://webhook/" callback: "http://clair-notifier/notifier/api/v1/notifications" headers: "" amqp: null stomp: null # ... 5.1.8.1. Clair webhook configuration fields The following webhook fields are available for the Clair notifier environment. Table 5.3. Clair webhook fields .webhook Object Configures the notifier for webhook delivery. .webhook.target String URL where the webhook will be delivered. .webhook.callback String The callback URL where notifications can be retrieved. The notification ID will be appended to this URL. This will typically be where the Clair notifier is hosted. .webhook.headers String A map associating a header name to a list of values. Example webhook configuration Example webhook configuration # ... notifier: # ... webhook: target: "http://webhook/" callback: "http://clair-notifier/notifier/api/v1/notifications" # ... 5.1.8.2. Clair amqp configuration fields The following Advanced Message Queuing Protocol (AMQP) fields are available for the Clair notifier environment. .amqp Object Configures the notifier for AMQP delivery. [NOTE] ==== Clair does not declare any AMQP components on its own. All attempts to use an exchange or queue are passive only and will fail. Broker administrators should setup exchanges and queues ahead of time. ==== .amqp.direct Boolean If true , the notifier will deliver individual notifications (not a callback) to the configured AMQP broker. .amqp.rollup Integer When amqp.direct is set to true , this value informs the notifier of how many notifications to send in a direct delivery. For example, if direct is set to true , and amqp.rollup is set to 5 , the notifier delivers no more than 5 notifications in a single JSON payload to the broker. Setting the value to 0 effectively sets it to 1 . .amqp.exchange Object The AMQP exchange to connect to. .amqp.exchange.name String The name of the exchange to connect to. .amqp.exchange.type String The type of the exchange. Typically one of the following: direct , fanout , topic , headers . .amqp.exchange.durability Boolean Whether the configured queue is durable. .amqp.exchange.auto_delete Boolean Whether the configured queue uses an auto_delete_policy . 
.amqp.routing_key String The name of the routing key each notification is sent with. .amqp.callback String If amqp.direct is set to false , this URL is provided in the notification callback sent to the broker. This URL should point to Clair's notification API endpoint. .amqp.uris String A list of one or more AMQP brokers to connect to, in priority order. .amqp.tls Object Configures TLS/SSL connection to an AMQP broker. .amqp.tls.root_ca String The filesystem path where a root CA can be read. .amqp.tls.cert String The filesystem path where a TLS/SSL certificate can be read. [NOTE] ==== Clair also allows SSL_CERT_DIR , as documented for the Go crypto/x509 package. ==== .amqp.tls.key String The filesystem path where a TLS/SSL private key can be read. Example AMQP configuration The following example shows a hypothetical AMQP configuration for Clair. Example AMQP configuration # ... notifier: # ... amqp: exchange: name: "" type: "direct" durable: true auto_delete: false uris: ["amqp://user:pass@host:10000/vhost"] direct: false routing_key: "notifications" callback: "http://clair-notifier/notifier/api/v1/notifications" tls: root_ca: "optional/path/to/rootca" cert: "madatory/path/to/cert" key: "madatory/path/to/key" # ... 5.1.8.3. Clair STOMP configuration fields The following Simple Text Oriented Message Protocol (STOMP) fields are available for the Clair notifier environment. .stomp Object Configures the notifier for STOMP delivery. .stomp.direct Boolean If true , the notifier delivers individual notifications (not a callback) to the configured STOMP broker. .stomp.rollup Integer If stomp.direct is set to true , this value limits the number of notifications sent in a single direct delivery. For example, if direct is set to true , and rollup is set to 5 , the notifier delivers no more than 5 notifications in a single JSON payload to the broker. Setting the value to 0 effectively sets it to 1 . .stomp.callback String If stomp.callback is set to false , the provided URL in the notification callback is sent to the broker. This URL should point to Clair's notification API endpoint. .stomp.destination String The STOMP destination to deliver notifications to. .stomp.uris String A list of one or more STOMP brokers to connect to in priority order. .stomp.tls Object Configured TLS/SSL connection to STOMP broker. .stomp.tls.root_ca String The filesystem path where a root CA can be read. [NOTE] ==== Clair also respects SSL_CERT_DIR , as documented for the Go crypto/x509 package. ==== .stomp.tls.cert String The filesystem path where a TLS/SSL certificate can be read. .stomp.tls.key String The filesystem path where a TLS/SSL private key can be read. .stomp.user String Configures login details for the STOMP broker. .stomp.user.login String The STOMP login to connect with. .stomp.user.passcode String The STOMP passcode to connect with. Example STOMP configuration The following example shows a hypothetical STOMP configuration for Clair. Example STOMP configuration # ... notifier: # ... stomp: desitnation: "notifications" direct: false callback: "http://clair-notifier/notifier/api/v1/notifications" login: login: "username" passcode: "passcode" tls: root_ca: "optional/path/to/rootca" cert: "madatory/path/to/cert" key: "madatory/path/to/key" # ... 5.1.9. Clair authorization configuration fields The following authorization configuration fields are available for Clair. Field Type Description auth Object Defines Clair's external and intra-service JWT based authentication. 
If multiple auth mechanisms are defined, Clair picks one. Currently, multiple mechanisms are unsupported. .psk String Defines pre-shared key authentication. .psk.key String A shared base64 encoded key distributed between all parties signing and verifying JWTs. .psk.iss String A list of JWT issuers to verify. An empty list accepts any issuer in a JWT claim. Example authorization configuration The following authorization snippet is for a minimal configuration. Example authorization configuration # ... auth: psk: key: MTU5YzA4Y2ZkNzJoMQ== 1 iss: ["quay"] # ... 5.1.10. Clair trace configuration fields The following trace configuration fields are available for Clair. Field Type Description trace Object Defines distributed tracing configuration based on OpenTelemetry. .name String The name of the application traces will belong to. .probability Integer The probability a trace will occur. .jaeger Object Defines values for Jaeger tracing. .jaeger.agent Object Defines values for configuring delivery to a Jaeger agent. .jaeger.agent.endpoint String An address in the <host>:<post> syntax where traces can be submitted. .jaeger.collector Object Defines values for configuring delivery to a Jaeger collector. .jaeger.collector.endpoint String An address in the <host>:<post> syntax where traces can be submitted. .jaeger.collector.username String A Jaeger username. .jaeger.collector.password String A Jaeger password. .jaeger.service_name String The service name registered in Jaeger. .jaeger.tags String Key-value pairs to provide additional metadata. .jaeger.buffer_max Integer The maximum number of spans that can be buffered in memory before they are sent to the Jaeger backend for storage and analysis. Example trace configuration The following example shows a hypothetical trace configuration for Clair. Example trace configuration # ... trace: name: "jaeger" probability: 1 jaeger: agent: endpoint: "localhost:6831" service_name: "clair" # ... 5.1.11. Clair metrics configuration fields The following metrics configuration fields are available for Clair. Field Type Description metrics Object Defines distributed tracing configuration based on OpenTelemetry. .name String The name of the metrics in use. .prometheus String Configuration for a Prometheus metrics exporter. .prometheus.endpoint String Defines the path where metrics are served. Example metrics configuration The following example shows a hypothetical metrics configuration for Clair. Example metrics configuration # ... metrics: name: "prometheus" prometheus: endpoint: "/metricsz" # ...
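Pulling these sections together, the following hedged sketch shows one way to generate a value for auth.psk.key and start a single Clair process that runs all three components. The configuration path is a placeholder, and combo mode is only appropriate when the indexer, matcher, and notifier blocks are all present in the file, as noted at the start of this chapter.

# Generate a base64-encoded pre-shared key and paste the output into auth.psk.key.
openssl rand -base64 32

# Start Clair in combo mode with the assembled configuration file.
clair -conf /etc/clair/config.yaml -mode combo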
[ "clair -conf ./path/to/config.yaml -mode indexer", "clair -conf ./path/to/config.yaml -mode matcher", "export HTTP_PROXY=http://<user_name>:<password>@<proxy_host>:<proxy_port>", "export HTTPS_PROXY=https://<user_name>:<password>@<proxy_host>:<proxy_port>", "export SSL_CERT_DIR=/<path>/<to>/<ssl>/<certificates>", "export NO_PROXY=<comma_separated_list_of_hosts_and_domains>", "http_listen_addr: \"\" introspection_addr: \"\" log_level: \"\" tls: {} indexer: connstring: \"\" scanlock_retry: 0 layer_scan_concurrency: 5 migrations: false scanner: {} airgap: false matcher: connstring: \"\" indexer_addr: \"\" migrations: false period: \"\" disable_updaters: false update_retention: 2 matchers: names: nil config: nil updaters: sets: nil config: nil notifier: connstring: \"\" migrations: false indexer_addr: \"\" matcher_addr: \"\" poll_interval: \"\" delivery_interval: \"\" disable_summary: false webhook: null amqp: null stomp: null auth: psk: nil trace: name: \"\" probability: null jaeger: agent: endpoint: \"\" collector: endpoint: \"\" username: null password: null service_name: \"\" tags: nil buffer_max: 0 metrics: name: \"\" prometheus: endpoint: null dogstatsd: url: \"\"", "http_listen_addr: 0.0.0.0:6060 introspection_addr: 0.0.0.0:8089 log_level: info", "indexer: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true", "matcher: connstring: >- host=<DB_HOST> port=5432 dbname=<matcher> user=<DB_USER> password=D<B_PASS> sslmode=verify-ca sslcert=/etc/clair/ssl/cert.pem sslkey=/etc/clair/ssl/key.pem sslrootcert=/etc/clair/ssl/ca.pem indexer_addr: http://clair-v4/ disable_updaters: false migrations: true period: 6h update_retention: 2", "matchers: names: - \"alpine-matcher\" - \"aws\" - \"debian\" - \"oracle\"", "updaters: sets: - rhel config: rhel: ignore_unpatched: false", "notifier: connstring: >- host=DB_HOST port=5432 dbname=notifier user=DB_USER password=DB_PASS sslmode=verify-ca sslcert=/etc/clair/ssl/cert.pem sslkey=/etc/clair/ssl/key.pem sslrootcert=/etc/clair/ssl/ca.pem indexer_addr: http://clair-v4/ matcher_addr: http://clair-v4/ delivery_interval: 5s migrations: true poll_interval: 15s webhook: target: \"http://webhook/\" callback: \"http://clair-notifier/notifier/api/v1/notifications\" headers: \"\" amqp: null stomp: null", "notifier: webhook: target: \"http://webhook/\" callback: \"http://clair-notifier/notifier/api/v1/notifications\"", "notifier: amqp: exchange: name: \"\" type: \"direct\" durable: true auto_delete: false uris: [\"amqp://user:pass@host:10000/vhost\"] direct: false routing_key: \"notifications\" callback: \"http://clair-notifier/notifier/api/v1/notifications\" tls: root_ca: \"optional/path/to/rootca\" cert: \"madatory/path/to/cert\" key: \"madatory/path/to/key\"", "notifier: stomp: desitnation: \"notifications\" direct: false callback: \"http://clair-notifier/notifier/api/v1/notifications\" login: login: \"username\" passcode: \"passcode\" tls: root_ca: \"optional/path/to/rootca\" cert: \"madatory/path/to/cert\" key: \"madatory/path/to/key\"", "auth: psk: key: MTU5YzA4Y2ZkNzJoMQ== 1 iss: [\"quay\"]", "trace: name: \"jaeger\" probability: 1 jaeger: agent: endpoint: \"localhost:6831\" service_name: \"clair\"", "metrics: name: \"prometheus\" prometheus: endpoint: \"/metricsz\"" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/configure_red_hat_quay/clair-vulnerability-scanner
Chapter 5. Connecting to AMQ Management Console for an Operator-based broker deployment
Chapter 5. Connecting to AMQ Management Console for an Operator-based broker deployment Each broker Pod in an Operator-based deployment hosts its own instance of AMQ Management Console at port 8161. The following procedures describe how to connect to AMQ Management Console for a deployed broker. Prerequisites You created a broker deployment using the AMQ Broker Operator. For example, to learn how to use a sample CR to create a basic broker deployment, see Section 3.4.1, "Deploying a basic broker instance" . You enabled access to AMQ Management Console for the brokers in your deployment. For more information about enabling access to AMQ Management Console, see Section 4.6, "Enabling access to AMQ Management Console" . 5.1. Connecting to AMQ Management Console When you enable access to AMQ Management Console in the Custom Resource (CR) instance for your broker deployment, the Operator automatically creates a dedicated Service and Route for each broker Pod to provide access to AMQ Management Console. The default name of the automatically-created Service is in the form <custom-resource-name> -wconsj- <broker-pod-ordinal> -svc . For example, my-broker-deployment-wconsj-0-svc . The default name of the automatically-created Route is in the form <custom-resource-name> -wconsj- <broker-pod-ordinal> -svc-rte . For example, my-broker-deployment-wconsj-0-svc-rte . This procedure shows you how to access the console for a running broker Pod. Procedure In the OpenShift Container Platform web console, click Networking Routes . On the Routes page, identify the wconsj Route for the given broker Pod. For example, my-broker-deployment-wconsj-0-svc-rte . Under Location , click the link that corresponds to the Route. A new tab opens in your web browser. Click the Management Console link. The AMQ Management Console login page opens. Note Credentials are required to log in to AMQ Management Console only if the requireLogin property of the CR is set to true . This property specifies whether login credentials are required to log in to the broker and AMQ Management Console. By default, the requireLogin property is set to false . If requireLogin is set to false , you can log in to AMQ Management Console without supplying a valid username and password by entering any text when prompted for a username and password. If the requireLogin property is set to true , enter a username and password. You can enter the credentials for a preconfigured user that is available for connecting to the broker and AMQ Management Console. You can find these credentials in the adminUser and adminPassword properties if these properties are configured in the Custom Resource (CR) instance. If these properties are not configured in the CR, the Operator automatically generates the credentials. To obtain the automatically generated credentials, see Section 5.2, "Accessing AMQ Management Console login credentials" . If you want to log in as any other user, note that a user must belong to a security role specified for the hawtio.role system property to have the permissions required to log in to AMQ Management Console. The default role for the hawtio.role system property is admin , which the preconfigured user belongs to. 5.2. Accessing AMQ Management Console login credentials If you do not specify a value for adminUser and adminPassword in the Custom Resource (CR) instance used for your broker deployment, the Operator automatically generates these credentials and stores them in a secret. 
The default secret name is in the form <custom-resource-name> -credentials-secret , for example, my-broker-deployment-credentials-secret . Note Values for adminUser and adminPassword are required to log in to the management console only if the requireLogin parameter of the CR is set to true . If requireLogin is set to false , you can log in to the console without supplying a valid username and password by entering any text when prompted for a username and password. This procedure shows how to access the login credentials. Procedure See the complete list of secrets in your OpenShift project. From the OpenShift Container Platform web console, click Workload Secrets . From the command line: Open the appropriate secret to reveal the Base64-encoded console login credentials. From the OpenShift Container Platform web console, click the secret that includes your broker Custom Resource instance in its name. Click the YAML tab. From the command line: To decode a value in the secret, use a command such as the following: Additional resources To learn more about using AMQ Management Console to view and manage brokers, see Managing brokers using AMQ Management Console in Managing AMQ Broker .
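For convenience, the console lookups described in Sections 5.1 and 5.2 can be combined into a short command-line sketch. The deployment name my-broker-deployment follows the default naming shown above, and the data keys AMQ_USER and AMQ_PASSWORD are assumptions about how the Operator stores the generated credentials; adjust all three to match your own deployment.
# Find the host name of the wconsj Route for broker Pod 0.
oc get route my-broker-deployment-wconsj-0-svc-rte -o jsonpath='{.spec.host}'
# Read and decode the automatically generated console credentials from the secret
# (the AMQ_USER and AMQ_PASSWORD key names are assumptions; inspect the secret YAML to confirm them).
oc get secret my-broker-deployment-credentials-secret -o jsonpath='{.data.AMQ_USER}' | base64 --decode
oc get secret my-broker-deployment-credentials-secret -o jsonpath='{.data.AMQ_PASSWORD}' | base64 --decode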
[ "oc get secrets", "oc edit secret <my-broker-deployment-credentials-secret>", "echo 'dXNlcl9uYW1l' | base64 --decode console_admin" ]
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.11/html/deploying_amq_broker_on_openshift/assembly-br-connecting-to-console-operator_broker-ocp
Chapter 32. Basic Data Binding Concepts
Chapter 32. Basic Data Binding Concepts Abstract There are a number of general topics that apply to how Apache CXF handles type mapping. 32.1. Including and Importing Schema Definitions Overview Apache CXF supports the including and importing of schema definitions, using the include and import schema tags. These tags enable you to insert definitions from external files or resources into the scope of a schema element. The essential difference between including and importing is: Including brings in definitions that belong to the same target namespace as the enclosing schema element. Importing brings in definitions that belong to a different target namespace from the enclosing schema element. xsd:include syntax The include directive has the following syntax: The referenced schema, given by anyURI , must either belong to the same target namespace as the enclosing schema, or not belong to any target namespace at all. If the referenced schema does not belong to any target namespace, it is automatically adopted into the enclosing schema's namespace when it is included. Example 32.1, "Example of a Schema that Includes Another Schema" shows an example of an XML Schema document that includes another XML Schema document. Example 32.1. Example of a Schema that Includes Another Schema Example 32.2, "Example of an Included Schema" shows the contents of the included schema file. Example 32.2. Example of an Included Schema xsd:import syntax The import directive has the following syntax: The imported definitions must belong to the namespaceAnyURI target namespace. If namespaceAnyURI is blank or remains unspecified, the imported schema definitions are unqualified. Example 32.3, "Example of a Schema that Imports Another Schema" shows an example of an XML Schema that imports another XML Schema. Example 32.3. Example of a Schema that Imports Another Schema Example 32.4, "Example of an Imported Schema" shows the contents of the imported schema file. Example 32.4. Example of an Imported Schema Using non-referenced schema documents Using types defined in a schema document that is not referenced in the service's WSDL document is a three-step process: Convert the schema document to a WSDL document using the xsd2wsdl tool. Generate Java for the types using the wsdl2java tool on the generated WSDL document. Important You will get a warning from the wsdl2java tool stating that the WSDL document does not define any services. You can ignore this warning. Add the generated classes to your classpath. A command-line sketch of this workflow follows the example listings at the end of this chapter. 32.2. XML Namespace Mapping Overview XML Schema type, group, and element definitions are scoped using namespaces. The namespaces prevent possible naming clashes between entities that use the same name. Java packages serve a similar purpose. Therefore, Apache CXF maps the target namespace of a schema document into a package containing the classes necessary to implement the structures defined in the schema document. Package naming The name of the generated package is derived from a schema's target namespace using the following algorithm: The URI scheme, if present, is stripped. Note Apache CXF will only strip the http: , https: , and urn: schemes. For example, the namespace http://www.widgetvendor.com/types/widgetTypes.xsd becomes //www.widgetvendor.com/types/widgetTypes.xsd . The trailing file type identifier, if present, is stripped. For example, //www.widgetvendor.com/types/widgetTypes.xsd becomes //www.widgetvendor.com/types/widgetTypes . The resulting string is broken into a list of strings using / and : as separators. 
So, //www.widgetvendor.com/types/widgetTypes becomes the list {"www.widgetvendor.com", "types", "widgetTypes"} . If the first string in the list is an internet domain name, it is decomposed as follows: The leading www. is stripped. The remaining string is split into its component parts using the . as the separator. The order of the list is reversed. So, {"www.widgetvendor.com", "types", "widgetTypes"} becomes {"com", "widgetvendor", "types", "widgetTypes"} Note Internet domain names end in one of the following: .com , .net , .edu , .org , .gov , or in one of the two-letter country codes. The strings are converted into all lower case. So, {"com", "widgetvendor", "types", "widgetTypes"} becomes {"com", "widgetvendor", "types", "widgettypes"} . The strings are normalized into valid Java package name components as follows: If the strings contain any special characters, the special characters are converted to an underscore( _ ). If any of the strings are a Java keyword, the keyword is prefixed with an underscore( _ ). If any of the strings begin with a numeral, the string is prefixed with an underscore( _ ). The strings are concatenated using . as a separator. So, {"com", "widgetvendor", "types", "widgettypes"} becomes the package name com.widgetvendor.types.widgettypes . The XML Schema constructs defined in the namespace http://www.widgetvendor.com/types/widgetTypes.xsd are mapped to the Java package com.widgetvendor.types.widgettypes . Package contents A JAXB generated package contains the following: A class implementing each complex type defined in the schema For more information on complex type mapping see Chapter 35, Using Complex Types . An enum type for any simple types defined using the enumeration facet For more information on how enumerations are mapped see Section 34.3, "Enumerations" . A public ObjectFactory class that contains methods for instantiating objects from the schema For more information on the ObjectFactory class see Section 32.3, "The Object Factory" . A package-info.java file that provides metadata about the classes in the package 32.3. The Object Factory Overview JAXB uses an object factory to provide a mechanism for instantiating instances of JAXB generated constructs. The object factory contains methods for instantiating all of the XML schema defined constructs in the package's scope. The only exception is that enumerations do not get a creation method in the object factory. Complex type factory methods For each Java class generated to implement an XML schema complex type, the object factory contains a method for creating an instance of the class. This method takes the form: For example, if your schema contained a complex type named widgetType , Apache CXF generates a class called WidgetType to implement it. Example 32.5, "Complex Type Object Factory Entry" shows the generated creation method in the object factory. Example 32.5. Complex Type Object Factory Entry Element factory methods For elements that are declared in the schema's global scope, Apache CXF inserts a factory method into the object factory. As discussed in Chapter 33, Using XML Elements , XML Schema elements are mapped to JAXBElement<T> objects. The creation method takes the form: For example, if you have an element named comment of type xsd:string , Apache CXF generates the object factory method shown in Example 32.6, "Element Object Factory Entry" Example 32.6. Element Object Factory Entry 32.4. 
Adding Classes to the Runtime Marshaller Overview When the Apache CXF runtime reads and writes XML data it uses a map that associates the XML Schema types with their representative Java types. By default, the map contains all of the types defined in the target namespace of the WSDL contract's schema element. It also contains any types that are generated from the namespaces of any schemas that are imported into the WSDL contract. The addition of types from namespaces other than the schema namespace used by an application's schema element is accomplished using the @XmlSeeAlso annotation. If your application needs to work with types that are generated outside the scope of your application's WSDL document, you can edit the @XmlSeeAlso annotation to add them to the JAXB map. Using the @XmlSeeAlso annotation The @XmlSeeAlso annotation can be added to the SEI of your service. It contains a comma separated list of classes to include in the JAXB context. Example 32.7, "Syntax for Adding Classes to the JAXB Context" shows the syntax for using the @XmlSeeAlso annotation. Example 32.7. Syntax for Adding Classes to the JAXB Context In cases where you have access to the JAXB generated classes, it is more efficient to use the ObjectFactory classes generated to support the needed types. Including the ObjectFactory class includes all of the classes that are known to the object factory. Example Example 32.8, "Adding Classes to the JAXB Context" shows an SEI annotated with @XmlSeeAlso . Example 32.8. Adding Classes to the JAXB Context
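The three-step workflow for non-referenced schema documents described in Section 32.1 can be sketched on the command line as follows. The schema file name widgetTypes.xsd and the output directories are hypothetical, and the exact tool options can vary between Apache CXF distributions, so treat this as an outline rather than a definitive invocation.
# Step 1: wrap the standalone schema in a WSDL document.
xsd2wsdl -o widgetTypes.wsdl widgetTypes.xsd
# Step 2: generate Java classes for the schema types from the generated WSDL document
# (wsdl2java warns that the WSDL defines no services; the warning can be ignored).
wsdl2java -d src/generated widgetTypes.wsdl
# Step 3: add the generated classes to the classpath when compiling the application.
javac -cp src/generated -d classes MyApplication.java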
[ "<include schemaLocation=\" anyURI \" />", "<definitions targetNamespace=\"http://schemas.redhat.com/tests/schema_parser\" xmlns:tns=\"http://schemas.redhat.com/tests/schema_parser\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" xmlns=\"http://schemas.xmlsoap.org/wsdl/\"> <types> <schema targetNamespace=\"http://schemas.redhat.com/tests/schema_parser\" xmlns=\"http://www.w3.org/2001/XMLSchema\"> <include schemaLocation=\"included.xsd\"/> <complexType name=\"IncludingSequence\"> <sequence> <element name=\"includedSeq\" type=\"tns:IncludedSequence\"/> </sequence> </complexType> </schema> </types> </definitions>", "<schema targetNamespace=\"http://schemas.redhat.com/tests/schema_parser\" xmlns=\"http://www.w3.org/2001/XMLSchema\"> <!-- Included type definitions --> <complexType name=\"IncludedSequence\"> <sequence> <element name=\"varInt\" type=\"int\"/> <element name=\"varString\" type=\"string\"/> </sequence> </complexType> </schema>", "<import namespace=\" namespaceAnyURI \" schemaLocation=\" schemaAnyURI \" />", "<definitions targetNamespace=\"http://schemas.redhat.com/tests/schema_parser\" xmlns:tns=\"http://schemas.redhat.com/tests/schema_parser\" xmlns:imp=\"http://schemas.redhat.com/tests/imported_types\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" xmlns=\"http://schemas.xmlsoap.org/wsdl/\"> <types> <schema targetNamespace=\"http://schemas.redhat.com/tests/schema_parser\" xmlns=\"http://www.w3.org/2001/XMLSchema\"> <import namespace=\"http://schemas.redhat.com/tests/imported_types\" schemaLocation=\"included.xsd\"/> <complexType name=\"IncludingSequence\"> <sequence> <element name=\"includedSeq\" type=\"imp:IncludedSequence\"/> </sequence> </complexType> </schema> </types> </definitions>", "<schema targetNamespace=\"http://schemas.redhat.com/tests/imported_types\" xmlns=\"http://www.w3.org/2001/XMLSchema\"> <!-- Included type definitions --> <complexType name=\"IncludedSequence\"> <sequence> <element name=\"varInt\" type=\"int\"/> <element name=\"varString\" type=\"string\"/> </sequence> </complexType> </schema>", "typeName create typeName ();", "public class ObjectFactory { WidgetType createWidgetType() { return new WidgetType(); } }", "public JAXBElement< elementType > create elementName ( elementType value);", "public class ObjectFactory { @XmlElementDecl(namespace = \"...\", name = \"comment\") public JAXBElement<String> createComment(String value) { return new JAXBElement<String>(_Comment_QNAME, String.class, null, value); } }", "import javax.xml.bind.annotation.XmlSeeAlso; @WebService() @XmlSeeAlso ({ Class1 .class, Class2 .class, ..., ClassN .class}) public class GeneratedSEI { }", "import javax.xml.bind.annotation.XmlSeeAlso; @WebService() @XmlSeeAlso({org.apache.schemas.types.test.ObjectFactory.class, org.apache.schemas.tests.group_test.ObjectFactory.class}) public interface Foo { }" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/jaxwsdatamappingoverview
8.92. libnl
8.92. libnl 8.92.1. RHBA-2013:1730 - libnl bug fix update Updated libnl packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The libnl packages contain a convenience library to simplify using the Linux kernel's Netlink sockets interface for network manipulation. Bug Fixes BZ# 682240 When a domain was started using the libvirt client libraries and utilities, a memory leak was triggered in the libnl library because libnl did not release memory that was no longer in use. With this update, the memory leaks in libnl are fixed, and libnl now releases memory once it is no longer needed. BZ# 689559 Prior to this update, libnl's error handling made generous use of the strerror() function. However, the strerror() function was not thread-safe, and multiple threads in an application could call libnl concurrently. With this update, all the occurrences of strerror() are replaced with a call to the strerror_r() function that puts the message into a thread-local static buffer. BZ# 953339 When the max_vfs parameter of the igb module, which specifies the maximum number of Virtual Functions, was set to any value greater than 50 on a KVM (Kernel-based Virtual Machine) host, the guest failed to start with the following error messages: error : virNetDevParseVfConfig:1484 : internal error missing IFLA_VF_INFO in netlink response error : virFileReadAll:457 : Failed to open file '/var/run/libvirt/qemu/eth0_vf0': No such file or directory error : virFileReadAll:457 : Failed to open file '/var/run/libvirt/qemu/eth1_vf0': No such file or directory This update increases the default receive buffer size to allow receiving Netlink messages that exceed the size of a memory page. Thus, guests are able to start on the KVM host, and the error messages no longer occur in the described scenario. Users of libnl are advised to upgrade to these updated packages, which fix these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/libnl
Appendix C. A Reference of Identity Management Files and Logs
Appendix C. A Reference of Identity Management Files and Logs C.1. Identity Management Configuration Files and Directories Table C.1. IdM Server and Client Configuration Files and Directories Directory or File Description /etc/ipa/ The main IdM configuration directory. /etc/ipa/default.conf Primary configuration file for IdM. Referenced when servers and clients start and when the user uses the ipa utility. /etc/ipa/server.conf An optional configuration file, does not exist by default. Referenced when the IdM server starts. If the file exists, it takes precedence over /etc/ipa/default.conf . /etc/ipa/cli.conf An optional configuration file, does not exist by default. Referenced when the user uses the ipa utility. If the file exists, it takes precedence over /etc/ipa/default.conf . /etc/ipa/ca.crt The CA certificate issued by the IdM server's CA. ~/.ipa/ The user-specific IdM directory created on the local system the first time the user runs an IdM command. Users can set individual configuration overrides by creating user-specific default.conf , server.conf , or cli.conf files in ~/.ipa/ . /etc/sssd/sssd.conf Configuration for the IdM domain and for IdM services used by SSSD. /usr/share/sssd/sssd.api.d/sssd-ipa.conf A schema of IdM-related SSSD options and their values. /etc/gssproxy/ The directory for the configuration of the GSS-Proxy protocol. The directory contains files for each GSS-API service and a general /etc/gssproxy/gssproxy.conf file. /etc/certmonger/certmonger.conf This configuration file contains default settings for the certmonger daemon that monitors certificates for impending expiration. /etc/custodia/custodia.conf Configuration file for the Custodia service that manages secrets for IdM applications. Table C.2. System Service Files and Directories Directory or File Description /etc/sysconfig/ systemd -specific files Table C.3. Web UI Files and Directories Directory or File Description /etc/ipa/html/ A symbolic link for the HTML files used by the IdM web UI. /etc/httpd/conf.d/ipa.conf Configuration files used by the Apache host for the web UI application. /etc/httpd/conf.d/ipa-rewrite.conf /etc/httpd/conf/ipa.keytab The keytab file used by the web server. /usr/share/ipa/ The directory for all HTML files, scripts, and stylesheets used by the web UI. /usr/share/ipa/ipa.conf /usr/share/ipa/updates/ Contains LDAP data, configuration, and schema updates for IdM. /usr/share/ipa/html/ Contains the HTML files, JavaScript files, and stylesheets used by the web UI. /usr/share/ipa/migration/ Contains HTML pages, stylesheets, and Python scripts used for running the IdM server in migration mode. /usr/share/ipa/ui/ Contains the scripts used by the UI to perform IdM operations. /etc/httpd/conf.d/ipa-pki-proxy.conf The configuration file for web-server-to-Certificate-System bridging. Table C.4. Kerberos Files and Directories Directory or File Description /etc/krb5.conf The Kerberos service configuration file. /var/lib/sss/pubconf/krb5.include.d/ Includes IdM-specific overrides for Kerberos client configuration. Table C.5. Directory Server Files and Directories Directory or File Description /var/lib/dirsrv/slapd- REALM_NAME / The database associated with the Directory Server instance used by the IdM server. /etc/sysconfig/dirsrv IdM-specific configuration of the dirsrv systemd service. /etc/dirsrv/slapd- REALM_NAME / The configuration and schema files associated with the Directory Server instance used by the IdM server. Table C.6. 
Certificate System Files and Directories Directory or File Description /etc/pki/pki-tomcat/ca/ The main directory for the IdM CA instance. /var/lib/pki/pki-tomcat/conf/ca/CS.cfg The main configuration file for the IdM CA instance. Table C.7. Cache Files and Directories Directory or File Description ~/.cache/ipa/ Contains a per-server API schema for the IdM client. IdM caches the API schema on the client for one hour. Table C.8. System Backup Files and Directories Directory or File Description /var/lib/ipa/sysrestore/ Contains backups of the system files and scripts that were reconfigured when the IdM server was installed. Includes the original .conf files for NSS, Kerberos (both krb5.conf and kdc.conf ), and NTP. /var/lib/ipa-client/sysrestore/ Contains backups of the system files and scripts that were reconfigured when the IdM client was installed. Commonly, this is the sssd.conf file for SSSD authentication services.
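The user-specific override mechanism listed in Table C.1 can be exercised with a short sketch like the following. Copying the entire system-wide default.conf is only one way to seed the override; creating ~/.ipa/default.conf with just the options you want to change also works.
# Create the per-user IdM configuration directory.
mkdir -p ~/.ipa
# Seed a user-specific override from the system-wide defaults, then edit the copied file.
cp /etc/ipa/default.conf ~/.ipa/default.conf
# The ipa utility now reads the user-specific settings in preference to /etc/ipa/default.conf.
ipa env | head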
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/config-files-logs
Chapter 9. PreprovisioningImage [metal3.io/v1alpha1]
Chapter 9. PreprovisioningImage [metal3.io/v1alpha1] Description PreprovisioningImage is the Schema for the preprovisioningimages API Type object 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object PreprovisioningImageSpec defines the desired state of PreprovisioningImage status object PreprovisioningImageStatus defines the observed state of PreprovisioningImage 9.1.1. .spec Description PreprovisioningImageSpec defines the desired state of PreprovisioningImage Type object Property Type Description acceptFormats array (string) acceptFormats is a list of acceptable image formats. architecture string architecture is the processor architecture for which to build the image. networkDataName string networkDataName is the name of a Secret in the local namespace that contains network data to build in to the image. 9.1.2. .status Description PreprovisioningImageStatus defines the observed state of PreprovisioningImage Type object Property Type Description architecture string architecture is the processor architecture for which the image is built conditions array conditions describe the state of the built image conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } extraKernelParams string extraKernelParams is a string with extra parameters to pass to the kernel when booting the image over network. Only makes sense for initrd images. format string format is the type of image that is available at the download url: either iso or initrd. imageUrl string imageUrl is the URL from which the built image can be downloaded. kernelUrl string kernelUrl is the URL from which the kernel of the image can be downloaded. Only makes sense for initrd images. networkData object networkData is a reference to the version of the Secret containing the network data used to build the image. 9.1.3. .status.conditions Description conditions describe the state of the built image Type array 9.1.4. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. 
// Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 9.1.5. .status.networkData Description networkData is a reference to the version of the Secret containing the network data used to build the image. Type object Property Type Description name string version string 9.2. API endpoints The following API endpoints are available: /apis/metal3.io/v1alpha1/preprovisioningimages GET : list objects of kind PreprovisioningImage /apis/metal3.io/v1alpha1/namespaces/{namespace}/preprovisioningimages DELETE : delete collection of PreprovisioningImage GET : list objects of kind PreprovisioningImage POST : create a PreprovisioningImage /apis/metal3.io/v1alpha1/namespaces/{namespace}/preprovisioningimages/{name} DELETE : delete a PreprovisioningImage GET : read the specified PreprovisioningImage PATCH : partially update the specified PreprovisioningImage PUT : replace the specified PreprovisioningImage /apis/metal3.io/v1alpha1/namespaces/{namespace}/preprovisioningimages/{name}/status GET : read status of the specified PreprovisioningImage PATCH : partially update status of the specified PreprovisioningImage PUT : replace status of the specified PreprovisioningImage 9.2.1. /apis/metal3.io/v1alpha1/preprovisioningimages Table 9.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. 
continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . 
In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind PreprovisioningImage Table 9.2. HTTP responses HTTP code Reponse body 200 - OK PreprovisioningImageList schema 401 - Unauthorized Empty 9.2.2. /apis/metal3.io/v1alpha1/namespaces/{namespace}/preprovisioningimages Table 9.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 9.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of PreprovisioningImage Table 9.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 9.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind PreprovisioningImage Table 9.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. 
Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 9.8. HTTP responses HTTP code Reponse body 200 - OK PreprovisioningImageList schema 401 - Unauthorized Empty HTTP method POST Description create a PreprovisioningImage Table 9.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.10. Body parameters Parameter Type Description body PreprovisioningImage schema Table 9.11. HTTP responses HTTP code Reponse body 200 - OK PreprovisioningImage schema 201 - Created PreprovisioningImage schema 202 - Accepted PreprovisioningImage schema 401 - Unauthorized Empty 9.2.3. /apis/metal3.io/v1alpha1/namespaces/{namespace}/preprovisioningimages/{name} Table 9.12. Global path parameters Parameter Type Description name string name of the PreprovisioningImage namespace string object name and auth scope, such as for teams and projects Table 9.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a PreprovisioningImage Table 9.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 9.15. 
Body parameters Parameter Type Description body DeleteOptions schema Table 9.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified PreprovisioningImage Table 9.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 9.18. HTTP responses HTTP code Reponse body 200 - OK PreprovisioningImage schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PreprovisioningImage Table 9.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 9.20. Body parameters Parameter Type Description body Patch schema Table 9.21. HTTP responses HTTP code Reponse body 200 - OK PreprovisioningImage schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PreprovisioningImage Table 9.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.23. Body parameters Parameter Type Description body PreprovisioningImage schema Table 9.24. HTTP responses HTTP code Reponse body 200 - OK PreprovisioningImage schema 201 - Created PreprovisioningImage schema 401 - Unauthorized Empty 9.2.4. /apis/metal3.io/v1alpha1/namespaces/{namespace}/preprovisioningimages/{name}/status Table 9.25. Global path parameters Parameter Type Description name string name of the PreprovisioningImage namespace string object name and auth scope, such as for teams and projects Table 9.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified PreprovisioningImage Table 9.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 9.28. HTTP responses HTTP code Reponse body 200 - OK PreprovisioningImage schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified PreprovisioningImage Table 9.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 9.30. Body parameters Parameter Type Description body Patch schema Table 9.31. HTTP responses HTTP code Response body 200 - OK PreprovisioningImage schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified PreprovisioningImage Table 9.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be 128 characters or less, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.33. Body parameters Parameter Type Description body PreprovisioningImage schema Table 9.34. HTTP responses HTTP code Response body 200 - OK PreprovisioningImage schema 201 - Created PreprovisioningImage schema 401 - Unauthorized Empty
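For quick reference, the endpoints documented above map onto standard oc operations. The following is a minimal sketch, not part of this API reference; the resource name, namespace, and label used in the patch are placeholders:

# Read the specified PreprovisioningImage (GET)
oc get preprovisioningimages.metal3.io <name> -n <namespace> -o yaml

# Partially update the specified PreprovisioningImage (PATCH) with a merge patch
oc patch preprovisioningimages.metal3.io <name> -n <namespace> --type=merge -p '{"metadata":{"labels":{"example-label":"demo"}}}'

# Read only the status subresource of the specified PreprovisioningImage
oc get preprovisioningimages.metal3.io <name> -n <namespace> -o jsonpath='{.status}'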
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/provisioning_apis/preprovisioningimage-metal3-io-v1alpha1
Chapter 2. Registering the system and managing subscriptions
Chapter 2. Registering the system and managing subscriptions Subscriptions cover products installed on Red Hat Enterprise Linux, including the operating system itself. If you have not registered the system, you have no access to the RHEL repositories. You cannot install software updates, such as security and bug fixes. Even if you have a self-support subscription, it grants access only to the knowledge base; other resources remain unavailable without additional subscriptions. By purchasing subscriptions and using Red Hat Content Delivery Network (CDN), you can track: Registered systems Products installed on registered systems Subscriptions attached to the installed products 2.1. Registering a system by using the command line Subscriptions cover products installed on Red Hat Enterprise Linux, including the operating system itself. If you have not registered the system, you have no access to the RHEL repositories. You cannot install software updates, such as security and bug fixes. Even if you have a self-support subscription, it grants access only to the knowledge base; other resources remain unavailable without additional subscriptions. You need to register the system to activate and manage the Red Hat Enterprise Linux subscription for your Red Hat account. Note To register the system with Red Hat Insights, you can use the rhc connect utility. For details, see Setting up remote host configuration . Prerequisites You have an active subscription for the Red Hat Enterprise Linux system. Procedure Register and subscribe the system: The command prompts you to enter the username and password of your Red Hat Customer Portal account. If the registration process fails, you can register the system with a specific pool. For details, proceed with the following steps: Determine the pool ID of a subscription: This command displays all available subscriptions for your Red Hat account. For every subscription, various characteristics are displayed, including the pool ID. Attach the appropriate subscription to your system by replacing <example_pool_id> with the pool ID determined in the previous step: Verification Verify the system under Inventory Systems in the Hybrid Cloud Console. Additional resources Understanding Red Hat Subscription Management Understanding your workflow for subscribing with Red Hat products Viewing your subscription inventory in the Hybrid Cloud Console 2.2. Registering a system by using the web console Subscriptions cover products installed on Red Hat Enterprise Linux, including the operating system itself. If you have not registered the system, you have no access to the RHEL repositories. You cannot install software updates, such as security and bug fixes. Even if you have a self-support subscription, it grants access only to the knowledge base; other resources remain unavailable without additional subscriptions. You can register a newly installed Red Hat Enterprise Linux system with your account credentials in the Red Hat web console. Prerequisites You have an active subscription for the RHEL system. You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . Procedure Open https://<ip_address_or_hostname>:9090 in a browser, and log in to the web console. In the Health field on the Overview page, click the Not registered warning, or click Subscriptions in the main menu to move to the page with your subscription information. In the Overview field, click Register .
In the Register system dialog, select the registration method. Optional: Enter your organization's name or ID. If your account belongs to more than one organization on the Red Hat Customer Portal, you must add the organization name or ID. To get the organization ID, check with your Technical Account Manager at Red Hat. If you do not want to connect your system to Red Hat Insights, clear the Insights checkbox. Click Register . Verification Check details of your subscription in the Hybrid Cloud Console . 2.3. Registering a system in the GNOME desktop environment Subscriptions cover products installed on Red Hat Enterprise Linux, including the operating system itself. If you have not registered the system, you have no access to the RHEL repositories. You cannot install software updates, such as security and bug fixes. Even if you have a self-support subscription, it grants access only to the knowledge base; other resources remain unavailable without additional subscriptions. Follow the steps in this procedure to enroll the system with your Red Hat account. Prerequisites You have created a Red Hat account . You are logged in to the GNOME desktop environment as the root user. For details, see Register and subscribe RHEL system to Red Hat Subscription Manager . Procedure Open the system menu , which is accessible from the upper-right screen corner, and click Settings . Go to About Subscription . If you want to register the system through Red Hat Satellite: In the Registration Server section, select Custom Address . Enter the server address in the URL field. In the Registration Type section, select your preferred registration method. Fill in the Registration Details section. Click Register .
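In addition to checking the Hybrid Cloud Console, you can confirm the result locally from the command line. The following is a minimal verification sketch, not part of the documented procedure:

subscription-manager status      # reports the overall registration status of the system
subscription-manager identity    # shows the system identity (UUID) and the organization it is registered to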
[ "subscription-manager register Registering to: subscription.rhsm.redhat.com:443/subscription Username: <example_username> Password: <example_password> The system has been registered with ID: 37to907c-ece6-49ea-9174-20b87ajk9ee7 The registered system name is: client1.example.com", "subscription-manager list --available --all", "subscription-manager attach --pool= <example_pool_id>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_basic_system_settings/assembly_registering-the-system-and-managing-subscriptions_configuring-basic-system-settings
Chapter 6. Troubleshooting common problems with distributed workloads for users
Chapter 6. Troubleshooting common problems with distributed workloads for users If you are experiencing errors in Red Hat OpenShift AI relating to distributed workloads, read this section to understand what could be causing the problem, and how to resolve the problem. If the problem is not documented here or in the release notes, contact Red Hat Support. 6.1. My Ray cluster is in a suspended state Problem The resource quota specified in the cluster queue configuration might be insufficient, or the resource flavor might not yet be created. Diagnosis The Ray cluster head pod or worker pods remain in a suspended state. Resolution In the OpenShift console, select your project from the Project list. Check the workload resource: Click Search , and from the Resources list, select Workload . Select the workload resource that is created with the Ray cluster resource, and click the YAML tab. Check the text in the status.conditions.message field, which provides the reason for the suspended state, as shown in the following example: status: conditions: - lastTransitionTime: '2024-05-29T13:05:09Z' message: 'couldn''t assign flavors to pod set small-group-jobtest12: insufficient quota for nvidia.com/gpu in flavor default-flavor in ClusterQueue' Check the Ray cluster resource: Click Search , and from the Resources list, select RayCluster . Select the Ray cluster resource, and click the YAML tab. Check the text in the status.conditions.message field. Check the cluster queue resource: Click Search , and from the Resources list, select ClusterQueue . Check your cluster queue configuration to ensure that the resources that you requested are within the limits defined for the project. Either reduce your requested resources, or contact your administrator to request more resources. 6.2. My Ray cluster is in a failed state Problem You might have insufficient resources. Diagnosis The Ray cluster head pod or worker pods are not running. When a Ray cluster is created, it initially enters a failed state. This failed state usually resolves after the reconciliation process completes and the Ray cluster pods are running. Resolution If the failed state persists, complete the following steps: In the OpenShift console, select your project from the Project list. Click Search , and from the Resources list, select Pod . Click your pod name to open the pod details page. Click the Events tab, and review the pod events to identify the cause of the problem. If you cannot resolve the problem, contact your administrator to request assistance. 6.3. 
I see a failed to call webhook error message for the CodeFlare Operator Problem After you run the cluster.up() command, the following error is shown: ApiException: (500) Reason: Internal Server Error HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Internal error occurred: failed calling webhook \"mraycluster.ray.openshift.ai\": failed to call webhook: Post \"https://codeflare-operator-webhook-service.redhat-ods-applications.svc:443/mutate-ray-io-v1-raycluster?timeout=10s\": no endpoints available for service \"codeflare-operator-webhook-service\"","reason":"InternalError","details":{"causes":[{"message":"failed calling webhook \"mraycluster.ray.openshift.ai\": failed to call webhook: Post \"https://codeflare-operator-webhook-service.redhat-ods-applications.svc:443/mutate-ray-io-v1-raycluster?timeout=10s\": no endpoints available for service \"codeflare-operator-webhook-service\""}]},"code":500} Diagnosis The CodeFlare Operator pod might not be running. Resolution Contact your administrator to request assistance. 6.4. I see a failed to call webhook error message for Kueue Problem After you run the cluster.up() command, the following error is shown: ApiException: (500) Reason: Internal Server Error HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Internal error occurred: failed calling webhook \"mraycluster.kb.io\": failed to call webhook: Post \"https://kueue-webhook-service.redhat-ods-applications.svc:443/mutate-ray-io-v1-raycluster?timeout=10s\": no endpoints available for service \"kueue-webhook-service\"","reason":"InternalError","details":{"causes":[{"message":"failed calling webhook \"mraycluster.kb.io\": failed to call webhook: Post \"https://kueue-webhook-service.redhat-ods-applications.svc:443/mutate-ray-io-v1-raycluster?timeout=10s\": no endpoints available for service \"kueue-webhook-service\""}]},"code":500} Diagnosis The Kueue pod might not be running. Resolution Contact your administrator to request assistance. 6.5. My Ray cluster doesn't start Problem After you run the cluster.up() command, when you run either the cluster.details() command or the cluster.status() command, the Ray Cluster remains in the Starting status instead of changing to the Ready status. No pods are created. Diagnosis In the OpenShift console, select your project from the Project list. Check the workload resource: Click Search , and from the Resources list, select Workload . Select the workload resource that is created with the Ray cluster resource, and click the YAML tab. Check the text in the status.conditions.message field, which provides the reason for remaining in the Starting state. Check the Ray cluster resource: Click Search , and from the Resources list, select RayCluster . Select the Ray cluster resource, and click the YAML tab. Check the text in the status.conditions.message field. Resolution If you cannot resolve the problem, contact your administrator to request assistance. 6.6. I see a Default Local Queue ... not found error message Problem After you run the cluster.up() command, the following error is shown: Default Local Queue with kueue.x-k8s.io/default-queue: true annotation not found please create a default Local Queue or provide the local_queue name in Cluster Configuration. Diagnosis No default local queue is defined, and a local queue is not specified in the cluster configuration. Resolution In the OpenShift console, select your project from the Project list. 
Click Search , and from the Resources list, select LocalQueue . Resolve the problem in one of the following ways: If a local queue exists, add it to your cluster configuration as follows: local_queue=" <local_queue_name> " If no local queue exists, contact your administrator to request assistance. 6.7. I see a local_queue provided does not exist error message Problem After you run the cluster.up() command, the following error is shown: local_queue provided does not exist or is not in this namespace. Please provide the correct local_queue name in Cluster Configuration. Diagnosis An incorrect value is specified for the local queue in the cluster configuration, or an incorrect default local queue is defined. The specified local queue either does not exist, or exists in a different namespace. Resolution In the OpenShift console, select your project from the Project list. Click Search , and from the Resources list, select LocalQueue . Resolve the problem in one of the following ways: If a local queue exists, ensure that you spelled the local queue name correctly in your cluster configuration, and that the namespace value in the cluster configuration matches your project name. If you do not specify a namespace value in the cluster configuration, the Ray cluster is created in the current project. If no local queue exists, contact your administrator to request assistance. 6.8. I cannot create a Ray cluster or submit jobs Problem After you run the cluster.up() command, an error similar to the following error is shown: RuntimeError: Failed to get RayCluster CustomResourceDefinition: (403) Reason: Forbidden HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"rayclusters.ray.io is forbidden: User \"system:serviceaccount:regularuser-project:regularuser-workbench\" cannot list resource \"rayclusters\" in API group \"ray.io\" in the namespace \"regularuser-project\"","reason":"Forbidden","details":{"group":"ray.io","kind":"rayclusters"},"code":403} Diagnosis The correct OpenShift login credentials are not specified in the TokenAuthentication section of your notebook code. Resolution Identify the correct OpenShift login credentials as follows: In the OpenShift console header, click your username and click Copy login command . In the new tab that opens, log in as the user whose credentials you want to use. Click Display Token . From the Log in with this token section, copy the token and server values. In your notebook code, specify the copied token and server values as follows: auth = TokenAuthentication( token = " <token> ", server = " <server> ", skip_tls=False ) auth.login() 6.9. My pod provisioned by Kueue is terminated before my image is pulled Problem Kueue waits for a period of time before marking a workload as ready, to enable all of the workload pods to become provisioned and running. By default, Kueue waits for 5 minutes. If the pod image is very large and is still being pulled after the 5-minute waiting period elapses, Kueue fails the workload and terminates the related pods. Diagnosis In the OpenShift console, select your project from the Project list. Click Search , and from the Resources list, select Pod . Click the Ray head pod name to open the pod details page. Click the Events tab, and review the pod events to check whether the image pull completed successfully. Resolution If the pod takes more than 5 minutes to pull the image, contact your administrator to request assistance.
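Many of the console checks described in this chapter can also be performed with the oc CLI. The following is a minimal sketch, assuming you are logged in to the cluster and that <project> is the data science project that contains your Ray cluster; the resource names correspond to the Kueue and Ray custom resources referenced above:

# Inspect the Kueue workload created for the Ray cluster and its status message
oc get workloads.kueue.x-k8s.io -n <project>
oc describe workloads.kueue.x-k8s.io <workload_name> -n <project>

# Inspect the Ray cluster resource and its pods
oc get rayclusters.ray.io -n <project>
oc get pods -n <project>

# Review the cluster queue and local queues that admit the workload
oc get clusterqueues.kueue.x-k8s.io
oc get localqueues.kueue.x-k8s.io -n <project>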
[ "status: conditions: - lastTransitionTime: '2024-05-29T13:05:09Z' message: 'couldn''t assign flavors to pod set small-group-jobtest12: insufficient quota for nvidia.com/gpu in flavor default-flavor in ClusterQueue'", "ApiException: (500) Reason: Internal Server Error HTTP response body: {\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"Internal error occurred: failed calling webhook \\\"mraycluster.ray.openshift.ai\\\": failed to call webhook: Post \\\"https://codeflare-operator-webhook-service.redhat-ods-applications.svc:443/mutate-ray-io-v1-raycluster?timeout=10s\\\": no endpoints available for service \\\"codeflare-operator-webhook-service\\\"\",\"reason\":\"InternalError\",\"details\":{\"causes\":[{\"message\":\"failed calling webhook \\\"mraycluster.ray.openshift.ai\\\": failed to call webhook: Post \\\"https://codeflare-operator-webhook-service.redhat-ods-applications.svc:443/mutate-ray-io-v1-raycluster?timeout=10s\\\": no endpoints available for service \\\"codeflare-operator-webhook-service\\\"\"}]},\"code\":500}", "ApiException: (500) Reason: Internal Server Error HTTP response body: {\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"Internal error occurred: failed calling webhook \\\"mraycluster.kb.io\\\": failed to call webhook: Post \\\"https://kueue-webhook-service.redhat-ods-applications.svc:443/mutate-ray-io-v1-raycluster?timeout=10s\\\": no endpoints available for service \\\"kueue-webhook-service\\\"\",\"reason\":\"InternalError\",\"details\":{\"causes\":[{\"message\":\"failed calling webhook \\\"mraycluster.kb.io\\\": failed to call webhook: Post \\\"https://kueue-webhook-service.redhat-ods-applications.svc:443/mutate-ray-io-v1-raycluster?timeout=10s\\\": no endpoints available for service \\\"kueue-webhook-service\\\"\"}]},\"code\":500}", "Default Local Queue with kueue.x-k8s.io/default-queue: true annotation not found please create a default Local Queue or provide the local_queue name in Cluster Configuration.", "local_queue=\" <local_queue_name> \"", "local_queue provided does not exist or is not in this namespace. Please provide the correct local_queue name in Cluster Configuration.", "RuntimeError: Failed to get RayCluster CustomResourceDefinition: (403) Reason: Forbidden HTTP response body: {\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"rayclusters.ray.io is forbidden: User \\\"system:serviceaccount:regularuser-project:regularuser-workbench\\\" cannot list resource \\\"rayclusters\\\" in API group \\\"ray.io\\\" in the namespace \\\"regularuser-project\\\"\",\"reason\":\"Forbidden\",\"details\":{\"group\":\"ray.io\",\"kind\":\"rayclusters\"},\"code\":403}", "auth = TokenAuthentication( token = \" <token> \", server = \" <server> \", skip_tls=False ) auth.login()" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/working_with_distributed_workloads/troubleshooting-common-problems-with-distributed-workloads-for-users_distributed-workloads
4.5. Creating a system image with Image Builder in the web console interface
4.5. Creating a system image with Image Builder in the web console interface The following steps describe how to create a system image. Prerequisites You have opened the Image Builder interface of the RHEL 7 web console in a browser. A blueprint exists. Procedure 1. Locate the blueprint from which you want to build an image by entering its name or a part of it into the search box at top left, and press Enter . The search is added to the list of filters under the text entry field, and the list of blueprints below is reduced to those that match the search. If the list of blueprints is too long, add further search terms in the same way. 2. On the right side of the blueprint, press the Create Image button that belongs to the blueprint. A pop-up window appears. 3. Select the image type and architecture and press Create. A small pop-up in the top right informs you that the image creation has been added to the queue. 4. Click the name of the blueprint. A screen with details of the blueprint opens. 5. Click the Images tab to switch to it. The image that is being created is listed with the status In Progress. Important Image creation takes a relatively long time, measured in minutes. There is no indication of progress while the image is created. To abort image creation, press its Stop button on the right. 6. Once the image is successfully created, the Stop button is replaced by a Download button. Click this button to download the image to your system.
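If you prefer to script the same workflow, Image Builder also provides a command-line client. The following is a minimal sketch using composer-cli; the blueprint name and image type are example values, and the exact options and output can differ between releases:

composer-cli blueprints list                    # confirm that the blueprint exists
composer-cli compose start my-blueprint qcow2   # queue an image build for the example blueprint and type
composer-cli compose status                     # watch the build until it finishes
composer-cli compose image <compose_uuid>       # download the finished image by its UUID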
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/image_builder_guide/sect-documentation-image_builder-chapter4-section_5
Chapter 5. Deploying Ceph
Chapter 5. Deploying Ceph Once the prerequisites and initial tuning are complete, consider deploying a Ceph cluster. When deploying a production cluster, Red Hat recommends setting up the initial monitor cluster and enough OSD nodes to reach an active + clean state. For details, see the Installing a Red Hat Ceph Storage Cluster section in the Red Hat Ceph Storage 4 Installation Guide. Then, install the Ceph CLI client on an administration node. For details, see the Installing the Ceph Command Line Interface section in the Red Hat Ceph Storage 4 Installation Guide. Once the initial cluster is running, consider adding the settings in the following sections to the Ceph configuration file.
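Before applying the settings in the following sections, it is worth confirming that the cluster has actually reached the active + clean state. A minimal verification sketch, run from an administration node with the Ceph CLI installed (output varies by release):

ceph -s         # overall cluster status, including health and monitor quorum
ceph osd stat   # number of OSDs that are up and in
ceph pg stat    # placement group summary; all PGs should report active+clean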
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/object_gateway_for_production_guide/deploying-ceph-rgw-prod
Chapter 5. Installing a cluster on IBM Power Virtual Server into an existing VPC
Chapter 5. Installing a cluster on IBM Power Virtual Server into an existing VPC In OpenShift Container Platform version 4.14, you can install a cluster into an existing Virtual Private Cloud (VPC) on IBM Cloud(R). The installation program provisions the rest of the required infrastructure, which you can then further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. Important IBM Power(R) Virtual Server using installer-provisioned infrastructure is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an IBM Cloud(R) account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring the Cloud Credential Operator utility . 5.2. About using a custom VPC In OpenShift Container Platform 4.14, you can deploy a cluster using an existing IBM(R) Virtual Private Cloud (VPC). Because the installation program cannot know what other components are in your existing subnets, it cannot choose subnet CIDRs and so forth. You must configure networking for the subnets to which you will install the cluster. 5.2.1. Requirements for using your VPC You must correctly configure the existing VPC and its subnets before you install the cluster. The installation program does not create a VPC or VPC subnet in this scenario. The installation program cannot: Subdivide network ranges for the cluster to use Set route tables for the subnets Set VPC options like DHCP Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. 5.2.2. VPC validation The VPC and all of the subnets must be in an existing resource group. The cluster is deployed to this resource group. As part of the installation, specify the following in the install-config.yaml file: The name of the resource group The name of the VPC The name of the VPC subnet To ensure that the subnets that you provide are suitable, the installation program confirms that all of the subnets you specify exist. Note Subnet IDs are not supported. 5.2.3. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: ICMP Ingress is allowed to the entire network. TCP port 22 Ingress (SSH) is allowed to the entire network. Control plane TCP 6443 Ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 Ingress (MCS) is allowed to the entire network. 5.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster.
You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 5.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 5.6. Exporting the API key You must set the API key you created as a global variable; the installation program ingests the variable during startup to set the API key. Prerequisites You have created either a user API key or service ID API key for your IBM Cloud(R) account. Procedure Export your API key for your account as a global variable: USD export IBMCLOUD_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 5.7. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on IBM Power(R) Virtual Server. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file.
Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for IBM Power(R) Virtual Server 5.7.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 5.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. 
The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 5.7.2. Sample customized install-config.yaml file for IBM Power Virtual Server You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: {} replicas: 3 controlPlane: 4 5 architecture: ppc64le hyperthreading: Enabled 6 name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: example-cluster-existing-vpc networking: clusterNetwork: - cidr: 10.128.0.0/14 7 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 8 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id powervsResourceGroup: "ibmcloud-resource-group" region: powervs-region vpcRegion : vpc-region vpcName: name-of-existing-vpc 9 vpcSubnets: 10 - powervs-region-example-subnet-1 zone: powervs-zone serviceInstanceID: "powervs-region-service-instance-id" credentialsMode: Manual publish: External 11 pullSecret: '{"auths": ...}' 12 fips: false sshKey: ssh-ed25519 AAAA... 13 1 4 If you do not provide these parameters and values, the installation program provides the default value. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Both sections currently define a single machine pool. Only one control plane pool is used. 3 6 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. 7 The machine CIDR must contain the subnets for the compute machines and control plane machines. 8 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 9 Specify the name of an existing VPC. 10 Specify the name of the existing VPC subnet. The subnets must belong to the VPC that you specified. Specify a subnet for each availability zone in the region. 11 How to publish the user-facing endpoints of your cluster. 12 Required. The installation program prompts you for this value. 13 Provide the sshKey value that you use to access the machines in your cluster. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. 
Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 5.7.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. 
For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 5.8. Manually creating IAM Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for your cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud(R) resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . To generate the manifests, run the following command from the directory that contains the installation program: USD ./openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object.
Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer Create the service ID for each credential request, assign the policies defined, create an API key, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir=<path_to_credential_requests_directory> \ 1 --name=<cluster_name> \ 2 --output-dir=<installation_directory> \ 3 --resource-group-name=<resource_group_name> 4 1 Specify the directory containing the files for the component CredentialsRequest objects. 2 Specify the name of the OpenShift Container Platform cluster. 3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 4 Optional: Specify the name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 5.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . 
Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 5.10. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list.
Click Download Now next to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 5.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources Accessing the web console 5.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 5.13. Next steps Customize your cluster Optional: Opt out of remote health reporting
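After logging in with the exported kubeconfig, a quick health check of the new cluster is often useful before moving on to the next steps. The following is a minimal verification sketch, not part of the documented procedure; it only assumes the oc CLI and kubeconfig set up above:

oc get nodes               # all control plane and compute nodes should report Ready
oc get clusteroperators    # all cluster Operators should be Available and not Degraded
oc get clusterversion      # confirms the installed OpenShift Container Platform version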
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "export IBMCLOUD_API_KEY=<api_key>", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: {} replicas: 3 controlPlane: 4 5 architecture: ppc64le hyperthreading: Enabled 6 name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: example-cluster-existing-vpc networking: clusterNetwork: - cidr: 10.128.0.0/14 7 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 8 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id powervsResourceGroup: \"ibmcloud-resource-group\" region: powervs-region vpcRegion : vpc-region vpcName: name-of-existing-vpc 9 vpcSubnets: 10 - powervs-region-example-subnet-1 zone: powervs-zone serviceInstanceID: \"powervs-region-service-instance-id\" credentialsMode: Manual publish: External 11 pullSecret: '{\"auths\": ...}' 12 fips: false sshKey: ssh-ed25519 AAAA... 13", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled", "./openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer", "ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4", "grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_ibm_power_virtual_server/installing-ibm-powervs-vpc
29.3. TCPGOSSIP Configuration Options
29.3. TCPGOSSIP Configuration Options The following TCPGOSSIP specific properties may be configured: initial_hosts - Comma delimited list of hosts to be contacted for initial membership. reconnect_interval - Interval (in milliseconds) by which a disconnected node attempts to reconnect to the Gossip Router. sock_conn_timeout - Max time (in milliseconds) allowed for socket creation. Defaults to 1000 . sock_read_timeout - Max time (in milliseconds) to block on a read. A value of 0 will block forever.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/TCPGOSSIP_Configuration_Options
Chapter 8. Handling large messages
Chapter 8. Handling large messages Clients might send large messages that can exceed the size of the broker's internal buffer, causing unexpected errors. To prevent this situation, you can configure the broker to store messages as files when the messages are larger than a specified minimum value. Handling large messages in this way means that the broker does not hold the messages in memory. Instead, you specify a directory on disk or in a database table in which the broker stores large message files. When the broker stores a message as a large message, the queue retains a reference to the file in the large messages directory or database table. Large message handling is available for the Core Protocol, AMQP, OpenWire and STOMP protocols. For the Core Protocol and OpenWire protocols, clients specify the minimum large message size in their connection configurations. For the AMQP and STOMP protocols, you specify the minimum large message size in the acceptor defined for each protocol in the broker configuration. Note It is recommended that you do not use different protocols for producing and consuming large messages. To do this, the broker might need to perform several conversions of the message. For example, say that you want to send a message using the AMQP protocol and receive it using OpenWire. In this situation, the broker must first read the entire body of the large message and convert it to use the Core protocol. Then, the broker must perform another conversion, this time to the OpenWire protocol. Message conversions such as these cause significant processing overhead on the broker. The minimum large message size that you specify for any of the preceding protocols is affected by system resources such as the amount of disk space available, as well as the sizes of the messages. It is recommended that you run performance tests using several values to determine an appropriate size. The procedures in this section show how to: Configure the broker to store large messages Configure acceptors for the AMQP and STOMP protocols for large message handling This section also links to additional resources about configuring AMQ Core Protocol and AMQ OpenWire JMS clients to work with large messages. 8.1. Configuring the broker for large message handling The following procedure shows how to specify a directory on disk or a database table in which the broker stores large message files. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Specify where you want the broker to store large message files. If you are storing large messages on disk, add the large-messages-directory parameter within the core element and specify a file system location. For example: <configuration> <core> ... <large-messages-directory>/path/to/my-large-messages-directory</large-messages-directory> ... </core> </configuration> Note If you do not explicitly specify a value for large-messages-directory , the broker uses a default value of <broker_instance_dir> /data/largemessages If you are storing large messages in a database table, add the large-message-table parameter to the database-store element and specify a value. For example: <store> <database-store> ... <large-message-table>MY_TABLE</large-message-table> ... </database-store> </store> Note If you do not explicitly specify a value for large-message-table , the broker uses a default value of LARGE_MESSAGE_TABLE . Additional resources For more information about configuring a database store, see Section 6.2, "Persisting message data in a database" . 8.2. 
Configuring AMQP acceptors for large message handling The following procedure shows how to configure an AMQP acceptor to handle an AMQP message larger than a specified size as a large message. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. The default AMQP acceptor in the broker configuration file looks as follows: <acceptors> ... <acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor> ... </acceptors> In the default AMQP acceptor (or another AMQP acceptor that you have configured), add the amqpMinLargeMessageSize property and specify a value. For example: <acceptors> ... <acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpMinLargeMessageSize=204800</acceptor> ... </acceptors> In the preceding example, the broker is configured to accept AMQP messages on port 5672. Based on the value of amqpMinLargeMessageSize , if the acceptor receives an AMQP message with a body larger than or equal to 204800 bytes (that is, 200 kilobytes), the broker stores the message as a large message. If you do not explicitly specify a value for this property, the broker uses a default value of 102400 (that is, 100 kilobytes). Note If you set amqpMinLargeMessageSize to -1, large message handling for AMQP messages is disabled. If the broker receives a persistent AMQP message that does not exceed the value of amqpMinLargeMessageSize , but which does exceed the size of the messaging journal buffer (specified using the journal-buffer-size configuration parameter), the broker converts the message to a large Core Protocol message, before storing it in the journal. 8.3. Configuring STOMP acceptors for large message handling The following procedure shows how to configure a STOMP acceptor to handle a STOMP message larger than a specified size as a large message. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. The default AMQP acceptor in the broker configuration file looks as follows: <acceptors> ... <acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor> ... </acceptors> In the default STOMP acceptor (or another STOMP acceptor that you have configured), add the stompMinLargeMessageSize property and specify a value. For example: <acceptors> ... <acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true;stompMinLargeMessageSize=204800</acceptor> ... </acceptors> In the preceding example, the broker is configured to accept STOMP messages on port 61613. Based on the value of stompMinLargeMessageSize , if the acceptor receives a STOMP message with a body larger than or equal to 204800 bytes (that is, 200 kilobytes), the broker stores the message as a large message. If you do not explicitly specify a value for this property, the broker uses a default value of 102400 (that is, 100 kilobytes). Note To deliver a large message to a STOMP consumer, the broker automatically converts the message from a large message to a normal message before sending it to the client. If a large message is compressed, the broker decompresses it before sending it to STOMP clients. 8.4. Large messages and Java clients There are two options available to Java developers who are writing clients that use large messages. 
One option is to use instances of InputStream and OutputStream . For example, a FileInputStream can be used to send a message taken from a large file on a physical disk. A FileOutputStream can then be used by the receiver to stream the message to a location on its local file system. Another option is to stream a JMS BytesMessage or StreamMessage directly. For example: BytesMessage rm = (BytesMessage)cons.receive(10000); byte data[] = new byte[1024]; for (int i = 0; i < rm.getBodyLength(); i += 1024) { int numberOfBytes = rm.readBytes(data); // Do whatever you want with the data } Additional resources To learn about working with large messages in the AMQ Core Protocol JMS client, see: Large message options Writing to a streamed large message Reading from a streamed large message To learn about working with large messages in the AMQ OpenWire JMS client, see: Large message options Writing to a streamed large message Reading from a streamed large message For an example of working with large messages, see the large-message example in the <install_dir> /examples/features/standard/ directory of your AMQ Broker installation. To learn more about running example programs, see Running an AMQ Broker example program .
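To confirm that large message handling behaves as expected after you set a minimum size, you can send an oversized test message and inspect the large messages location. The sketch below is an assumption-laden example: it uses the artemis producer command that ships with the broker, a hypothetical queue named testQueue, the default port 61616, and the default on-disk large messages directory; adjust the values, and add --user and --password if your broker requires authentication.
# Send one test message whose body exceeds the configured minimum large message size
$ <broker_instance_dir>/bin/artemis producer --url tcp://localhost:61616 --destination queue://testQueue --message-count 1 --message-size 300000
# If large messages are stored on disk, a new file should appear in the large messages directory
$ ls <broker_instance_dir>/data/largemessages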
[ "<configuration> <core> <large-messages-directory>/path/to/my-large-messages-directory</large-messages-directory> </core> </configuration>", "<store> <database-store> <large-message-table>MY_TABLE</large-message-table> </database-store> </store>", "<acceptors> <acceptor name=\"amqp\">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor> </acceptors>", "<acceptors> <acceptor name=\"amqp\">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpMinLargeMessageSize=204800</acceptor> </acceptors>", "<acceptors> <acceptor name=\"stomp\">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor> </acceptors>", "<acceptors> <acceptor name=\"stomp\">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true;stompMinLargeMessageSize=204800</acceptor> </acceptors>", "BytesMessage rm = (BytesMessage)cons.receive(10000); byte data[] = new byte[1024]; for (int i = 0; i < rm.getBodyLength(); i += 1024) { int numberOfBytes = rm.readBytes(data); // Do whatever you want with the data }" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/configuring_amq_broker/assembly-br-handling-large-messages_configuring
Chapter 15. Idling applications
Chapter 15. Idling applications Cluster administrators can idle applications to reduce resource consumption. This is useful when the cluster is deployed on a public cloud where cost is related to resource consumption. If any scalable resources are not in use, OpenShift Container Platform discovers and idles them by scaling their replicas to 0 . The next time network traffic is directed to the resources, the resources are unidled by scaling up the replicas, and normal operation continues. Applications are made of services, as well as other scalable resources, such as deployment configs. The action of idling an application involves idling all associated resources. 15.1. Idling applications Idling an application involves finding the scalable resources (deployment configurations, replication controllers, and others) associated with a service. Idling an application finds the service and marks it as idled, scaling down the resources to zero replicas. You can use the oc idle command to idle a single service, or use the --resource-names-file option to idle multiple services. 15.1.1. Idling a single service Procedure To idle a single service, run: $ oc idle <service> 15.1.2. Idling multiple services Idling multiple services is helpful if an application spans a set of services within a project, or when idling multiple services in conjunction with a script to idle multiple applications in bulk within the same project. Procedure Create a file containing a list of the services, each on its own line. Idle the services using the --resource-names-file option: $ oc idle --resource-names-file <filename> Note The idle command is limited to a single project. For idling applications across a cluster, run the idle command for each project individually. 15.2. Unidling applications Application services become active again when they receive network traffic and are scaled back up to their previous state. This includes both traffic to the services and traffic passing through routes. Applications can also be manually unidled by scaling up the resources. Procedure To scale up a DeploymentConfig, run: $ oc scale --replicas=1 dc <dc_name> Note Automatic unidling by a router is currently only supported by the default HAProxy router.
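As a concrete illustration of the bulk case, the following sketch builds a service list for the current project and passes it to oc idle; the temporary file name is arbitrary and the commands assume you are already logged in and switched to the correct project.
# List every service in the current project, one name per line
$ oc get services -o custom-columns=NAME:.metadata.name --no-headers > /tmp/services.txt
# Idle all of the listed services in one call
$ oc idle --resource-names-file /tmp/services.txt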
[ "oc idle <service>", "oc idle --resource-names-file <filename>", "oc scale --replicas=1 dc <dc_name>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/building_applications/idling-applications
Appendix A. Using your subscription
Appendix A. Using your subscription Debezium is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. Accessing your account Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. Activating a subscription Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. Downloading zip and tar files To access zip or tar files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required. Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Scroll down to INTEGRATION AND AUTOMATION . Click Red Hat Integration to display the Red Hat Integration downloads page. Click the Download link for your component.
null
https://docs.redhat.com/en/documentation/red_hat_integration/2023.q4/html/installing_debezium_on_openshift/using_your_subscription
Chapter 11. Updating a cluster that includes RHEL compute machines
Chapter 11. Updating a cluster that includes RHEL compute machines You can update, or upgrade, an OpenShift Container Platform cluster. If your cluster contains Red Hat Enterprise Linux (RHEL) machines, you must perform additional steps to update those machines. 11.1. Prerequisites Have access to the cluster as a user with admin privileges. See Using RBAC to define and apply permissions . Have a recent etcd backup in case your update fails and you must restore your cluster to a previous state. Support for RHEL7 workers is removed in OpenShift Container Platform 4.10. You must replace RHEL7 workers with RHEL8 or RHCOS workers before upgrading to OpenShift Container Platform 4.10. Red Hat does not support in-place RHEL7 to RHEL8 updates for RHEL workers; those hosts must be replaced with a clean operating system install. If your cluster uses manually maintained credentials, update the cloud provider resources for the new release. For more information, including how to determine if this is a requirement for your cluster, see Preparing to update a cluster with manually maintained credentials . If you run an Operator or you have configured any application with a pod disruption budget, you might experience an interruption during the upgrade process. If minAvailable is set to 1 in PodDisruptionBudget , the nodes are drained to apply pending machine configs, which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the PodDisruptionBudget field can prevent the node drain. Additional resources Support policy for unmanaged Operators 11.2. Updating a cluster by using the web console If updates are available, you can update your cluster from the web console. You can find information about available OpenShift Container Platform advisories and updates in the errata section of the Customer Portal. Prerequisites Have access to the web console as a user with admin privileges. Pause all MachineHealthCheck resources. Procedure From the web console, click Administration Cluster Settings and review the contents of the Details tab. For production clusters, ensure that the Channel is set to the correct channel for the version that you want to update to, such as stable-4.10 . Important For production clusters, you must subscribe to a stable-* , eus-* or fast-* channel. Note When you are ready to move to the next minor version, choose the channel that corresponds to that minor version. The sooner the update channel is declared, the more effectively the cluster can recommend update paths to your target version. The cluster might take some time to evaluate all the possible updates that are available and offer the best update recommendations to choose from. Update recommendations can change over time, as they are based on what update options are available at the time. If you cannot see an update path to your target minor version, keep updating your cluster to the latest patch release for your current version until the minor version is available in the path. If the Update status is not Updates available , you cannot update your cluster. The Select channel value indicates the cluster version that your cluster is running or is updating to. Select a version to update to, and click Save . The Update status changes to Update to <product-version> in progress , and you can review the progress of the cluster update by watching the progress bars for the Operators and nodes.
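If you prefer to follow the rollout from a terminal as well, the following optional sketch uses standard oc commands; it supplements, but does not replace, the web console procedure described here.
# Show the current version, channel, and any available or in-progress update
$ oc adm upgrade
# Watch the overall update status reported by the Cluster Version Operator
$ oc get clusterversion
# Confirm that all cluster Operators finish progressing
$ oc get clusteroperators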
Note If you are upgrading your cluster to the minor version, like version 4.y to 4.(y+1), it is recommended to confirm your nodes are upgraded before deploying workloads that rely on a new feature. Any pools with worker nodes that are not yet updated are displayed on the Cluster Settings page. After the update completes and the Cluster Version Operator refreshes the available updates, check if more updates are available in your current channel. If updates are available, continue to perform updates in the current channel until you can no longer update. If no updates are available, change the Channel to the stable-* , eus-* or fast-* channel for the minor version, and update to the version that you want in that channel. You might need to perform several intermediate updates until you reach the version that you want. Note When you update a cluster that contains Red Hat Enterprise Linux (RHEL) worker machines, those workers temporarily become unavailable during the update process. You must run the upgrade playbook against each RHEL machine as it enters the NotReady state for the cluster to finish updating. 11.3. Optional: Adding hooks to perform Ansible tasks on RHEL machines You can use hooks to run Ansible tasks on the RHEL compute machines during the OpenShift Container Platform update. 11.3.1. About Ansible hooks for upgrades When you update OpenShift Container Platform, you can run custom tasks on your Red Hat Enterprise Linux (RHEL) nodes during specific operations by using hooks . Hooks allow you to provide files that define tasks to run before or after specific update tasks. You can use hooks to validate or modify custom infrastructure when you update the RHEL compute nodes in you OpenShift Container Platform cluster. Because when a hook fails, the operation fails, you must design hooks that are idempotent, or can run multiple times and provide the same results. Hooks have the following important limitations: - Hooks do not have a defined or versioned interface. They can use internal openshift-ansible variables, but it is possible that the variables will be modified or removed in future OpenShift Container Platform releases. - Hooks do not have error handling, so an error in a hook halts the update process. If you get an error, you must address the problem and then start the upgrade again. 11.3.2. Configuring the Ansible inventory file to use hooks You define the hooks to use when you update the Red Hat Enterprise Linux (RHEL) compute machines, which are also known as worker machines, in the hosts inventory file under the all:vars section. Prerequisites You have access to the machine that you used to add the RHEL compute machines cluster. You must have access to the hosts Ansible inventory file that defines your RHEL machines. Procedure After you design the hook, create a YAML file that defines the Ansible tasks for it. This file must be a set of tasks and cannot be a playbook, as shown in the following example: --- # Trivial example forcing an operator to acknowledge the start of an upgrade # file=/home/user/openshift-ansible/hooks/pre_compute.yml - name: note the start of a compute machine update debug: msg: "Compute machine upgrade of {{ inventory_hostname }} is about to start" - name: require the user agree to start an upgrade pause: prompt: "Press Enter to start the compute machine update" Modify the hosts Ansible inventory file to specify the hook files. 
The hook files are specified as parameter values in the [all:vars] section, as shown: Example hook definitions in an inventory file To avoid ambiguity in the paths to the hook, use absolute paths instead of a relative paths in their definitions. 11.3.3. Available hooks for RHEL compute machines You can use the following hooks when you update the Red Hat Enterprise Linux (RHEL) compute machines in your OpenShift Container Platform cluster. Hook name Description openshift_node_pre_cordon_hook Runs before each node is cordoned. This hook runs against each node in serial. If a task must run against a different host, the task must use delegate_to or local_action . openshift_node_pre_upgrade_hook Runs after each node is cordoned but before it is updated. This hook runs against each node in serial. If a task must run against a different host, the task must use delegate_to or local_action . openshift_node_pre_uncordon_hook Runs after each node is updated but before it is uncordoned. This hook runs against each node in serial. If a task must run against a different host, they task must use delegate_to or local_action . openshift_node_post_upgrade_hook Runs after each node uncordoned. It is the last node update action. This hook runs against each node in serial. If a task must run against a different host, the task must use delegate_to or local_action . 11.4. Updating RHEL compute machines in your cluster After you update your cluster, you must update the Red Hat Enterprise Linux (RHEL) compute machines in your cluster. Important Red Hat Enterprise Linux (RHEL) versions 8.5-8.8 are supported for RHEL compute machines. You can also update your compute machines to another minor version of OpenShift Container Platform if you are using RHEL as the operating system. You do not need to exclude any RPM packages from RHEL when performing a minor version update. Important You cannot upgrade RHEL 7 compute machines to RHEL 8. You must deploy new RHEL 8 hosts, and the old RHEL 7 hosts should be removed. Prerequisites You updated your cluster. Important Because the RHEL machines require assets that are generated by the cluster to complete the update process, you must update the cluster before you update the RHEL worker machines in it. You have access to the local machine that you used to add the RHEL compute machines to your cluster. You must have access to the hosts Ansible inventory file that defines your RHEL machines and the upgrade playbook. For updates to a minor version, the RPM repository is using the same version of OpenShift Container Platform that is running on your cluster. Procedure Stop and disable firewalld on the host: # systemctl disable --now firewalld.service Note By default, the base OS RHEL with "Minimal" installation option enables firewalld service. Having the firewalld service enabled on your host prevents you from accessing OpenShift Container Platform logs on the worker. Do not enable firewalld later if you wish to continue accessing OpenShift Container Platform logs on the worker. Enable the repositories that are required for OpenShift Container Platform 4.10: On the machine that you run the Ansible playbooks, update the required repositories: # subscription-manager repos --disable=rhel-7-server-ose-4.9-rpms \ --enable=rhel-7-server-ose-4.10-rpms Important As of OpenShift Container Platform 4.10.23, running the Ansible playbooks on RHEL 7 is deprecated, and suggested only for the purpose of updating existing installations. 
After completing this procedure, you must either upgrade the Ansible host to RHEL 8, or create a new Ansible host on a RHEL 8 system and copy over the inventories from the old Ansible host. Starting with OpenShift Container Platform 4.11, the Ansible playbooks are provided only for RHEL 8. On the machine that you run the Ansible playbooks, update the required packages, including openshift-ansible : # yum update openshift-ansible openshift-clients On each RHEL compute node, update the required repositories: # subscription-manager repos --disable=rhocp-4.9-for-rhel-8-x86_64-rpms \ --enable=rhocp-4.10-for-rhel-8-x86_64-rpms Update a RHEL worker machine: Review your Ansible inventory file at /<path>/inventory/hosts and update its contents so that the RHEL 8 machines are listed in the [workers] section, as shown in the following example: Change to the openshift-ansible directory: USD cd /usr/share/ansible/openshift-ansible Run the upgrade playbook: USD ansible-playbook -i /<path>/inventory/hosts playbooks/upgrade.yml 1 1 For <path> , specify the path to the Ansible inventory file that you created. Note The upgrade playbook only upgrades the OpenShift Container Platform packages. It does not update the operating system packages. After you update all of the workers, confirm that all of your cluster nodes have updated to the new version: # oc get node Example output NAME STATUS ROLES AGE VERSION mycluster-control-plane-0 Ready master 145m v1.23.0 mycluster-control-plane-1 Ready master 145m v1.23.0 mycluster-control-plane-2 Ready master 145m v1.23.0 mycluster-rhel8-0 Ready worker 98m v1.23.0 mycluster-rhel8-1 Ready worker 98m v1.23.0 mycluster-rhel8-2 Ready worker 98m v1.23.0 mycluster-rhel8-3 Ready worker 98m v1.23.0 Optional: Update the operating system packages that were not updated by the upgrade playbook. To update packages that are not on 4.10, use the following command: # yum update Note You do not need to exclude RPM packages if you are using the same RPM repository that you used when you installed 4.10.
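If you want an extra check on the RHEL workers specifically after the playbook run, the following sketch assumes your RHEL nodes carry the standard node.openshift.io/os_id=rhel label and that you can reach a worker over SSH; the worker hostname is a placeholder.
# List only the RHEL worker nodes and show the kubelet version on each one
$ oc get nodes -l node.openshift.io/os_id=rhel -o custom-columns=NAME:.metadata.name,VERSION:.status.nodeInfo.kubeletVersion
# Optionally confirm that firewalld remained disabled on a worker
$ ssh <rhel_worker_host> 'systemctl is-enabled firewalld'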
[ "--- Trivial example forcing an operator to acknowledge the start of an upgrade file=/home/user/openshift-ansible/hooks/pre_compute.yml - name: note the start of a compute machine update debug: msg: \"Compute machine upgrade of {{ inventory_hostname }} is about to start\" - name: require the user agree to start an upgrade pause: prompt: \"Press Enter to start the compute machine update\"", "[all:vars] openshift_node_pre_upgrade_hook=/home/user/openshift-ansible/hooks/pre_node.yml openshift_node_post_upgrade_hook=/home/user/openshift-ansible/hooks/post_node.yml", "systemctl disable --now firewalld.service", "subscription-manager repos --disable=rhel-7-server-ose-4.9-rpms --enable=rhel-7-server-ose-4.10-rpms", "yum update openshift-ansible openshift-clients", "subscription-manager repos --disable=rhocp-4.9-for-rhel-8-x86_64-rpms --enable=rhocp-4.10-for-rhel-8-x86_64-rpms", "[all:vars] ansible_user=root #ansible_become=True openshift_kubeconfig_path=\"~/.kube/config\" [workers] mycluster-rhel8-0.example.com mycluster-rhel8-1.example.com mycluster-rhel8-2.example.com mycluster-rhel8-3.example.com", "cd /usr/share/ansible/openshift-ansible", "ansible-playbook -i /<path>/inventory/hosts playbooks/upgrade.yml 1", "oc get node", "NAME STATUS ROLES AGE VERSION mycluster-control-plane-0 Ready master 145m v1.23.0 mycluster-control-plane-1 Ready master 145m v1.23.0 mycluster-control-plane-2 Ready master 145m v1.23.0 mycluster-rhel8-0 Ready worker 98m v1.23.0 mycluster-rhel8-1 Ready worker 98m v1.23.0 mycluster-rhel8-2 Ready worker 98m v1.23.0 mycluster-rhel8-3 Ready worker 98m v1.23.0", "yum update" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/updating_clusters/updating-cluster-rhel-compute
Chapter 1. Getting started with Data Grid Server
Chapter 1. Getting started with Data Grid Server Install the server distribution, create a user, and start your first Data Grid cluster. Ansible collection Automate installation of Data Grid clusters with our Ansible collection that optionally includes Keycloak caches and cross-site replication configuration. The Ansible collection also lets you inject Data Grid caches into the static configuration for each server instance during installation. The Ansible collection for Data Grid is available from the Red Hat Automation Hub . 1.1. Data Grid Server requirements Data Grid Server requires a Java Virtual Machine. See the Data Grid Supported Configurations for details on supported versions. 1.2. Downloading Data Grid Server distributions The Data Grid Server distribution is an archive of Java libraries ( JAR files) and configuration files. Procedure Access the Red Hat customer portal. Download Red Hat Data Grid 8.4 Server from the software downloads section . Run the md5sum or sha256sum command with the server download archive as the argument, for example: Compare with the MD5 or SHA-256 checksum value on the Data Grid Software Details page. Reference Data Grid Server README describes the contents of the server distribution. 1.3. Installing Data Grid Server Install the Data Grid Server distribution on a host system. Prerequisites Download a Data Grid Server distribution archive. Procedure Use any appropriate tool to extract the Data Grid Server archive to the host filesystem. The resulting directory is your USDRHDG_HOME . 1.4. Starting Data Grid Server Run Data Grid Server instances in a Java Virtual Machine (JVM) on any supported host. Prerequisites Download and install the server distribution. Procedure Open a terminal in USDRHDG_HOME . Start Data Grid Server instances with the server script. Linux Microsoft Windows Data Grid Server is running successfully when it logs the following messages: Verification Open 127.0.0.1:11222/console/ in any browser. Enter your credentials at the prompt and continue to Data Grid Console. 1.5. Passing Data Grid Server configuration at startup Specify custom configuration when you start Data Grid Server. Data Grid Server can parse multiple configuration files that you overlay on startup with the --server-config argument. You can use as many configuration overlay files as required, in any order. Configuration overlay files: Must be valid Data Grid configuration and contain the root server element or field. Do not need to be full configuration as long as your combination of overlay files results in a full configuration. Important Data Grid Server does not detect conflicting configuration between overlay files. Each overlay file overwrites any conflicting configuration in the preceding configuration. Note If you pass cache configuration to Data Grid Server on startup it does not dynamically create those cache across the cluster. You must manually propagate caches to each node. Additionally, cache configuration that you pass to Data Grid Server on startup must include the infinispan and cache-container elements. Prerequisites Download and install the server distribution. Add custom server configuration to the server/conf directory of your Data Grid Server installation. Procedure Open a terminal in USDRHDG_HOME . Specify one or more configuration files with the --server-config= or -c argument, for example: 1.6. Creating Data Grid users Add credentials to authenticate with Data Grid Server deployments through Hot Rod and REST endpoints. 
Before you can access the Data Grid Console or perform cache operations you must create at least one user with the Data Grid command line interface (CLI). Tip Data Grid enforces security authorization with role-based access control (RBAC). Create an admin user the first time you add credentials to gain full ADMIN permissions to your Data Grid deployment. Prerequisites Download and install Data Grid Server. Procedure Open a terminal in USDRHDG_HOME . Create an admin user with the user create command. bin/cli.sh user create admin -p changeme Tip Run help user from a CLI session to get complete command details. Verification Open user.properties and confirm the user exists. Note Adding credentials to a properties realm with the CLI creates the user only on the server instance to which you are connected. You must manually synchronize credentials in a properties realm to each node in the cluster. 1.6.1. Granting roles to users Assign roles to users and grant them permissions to perform cache operations and interact with Data Grid resources. Tip Grant roles to groups instead of users if you want to assign the same role to multiple users and centrally maintain their permissions. Prerequisites Have ADMIN permissions for Data Grid. Create Data Grid users. Procedure Create a CLI connection to Data Grid. Assign roles to users with the user roles grant command, for example: Verification List roles that you grant to users with the user roles ls command. 1.6.2. Adding users to groups Groups let you change permissions for multiple users. You assign a role to a group and then add users to that group. Users inherit permissions from the group role. Note You use groups as part of a property realm in the Data Grid Server configuration. Each group is a special type of user that also requires a username and password. Prerequisites Have ADMIN permissions for Data Grid. Create Data Grid users. Procedure Create a CLI connection to Data Grid. Use the user create command to create a group. Specify a group name with the --groups argument. Set a username and password for the group. List groups. Grant a role to the group. List roles for the group. Add users to the group one at a time. Verification Open groups.properties and confirm the group exists. 1.6.3. Data Grid user roles and permissions Data Grid includes several roles that provide users with permissions to access caches and Data Grid resources. Role Permissions Description admin ALL Superuser with all permissions including control of the Cache Manager lifecycle. deployer ALL_READ, ALL_WRITE, LISTEN, EXEC, MONITOR, CREATE Can create and delete Data Grid resources in addition to application permissions. application ALL_READ, ALL_WRITE, LISTEN, EXEC, MONITOR Has read and write access to Data Grid resources in addition to observer permissions. Can also listen to events and execute server tasks and scripts. observer ALL_READ, MONITOR Has read access to Data Grid resources in addition to monitor permissions. monitor MONITOR Can view statistics via JMX and the metrics endpoint. Additional resources org.infinispan.security.AuthorizationPermission Enum Data Grid configuration schema reference 1.7. Verifying cluster views Data Grid Server instances on the same network automatically discover each other and form clusters. Complete this procedure to observe cluster discovery with the MPING protocol in the default TCP stack with locally running Data Grid Server instances. 
If you want to adjust cluster transport for custom network requirements, see the documentation for setting up Data Grid clusters. Note This procedure is intended to demonstrate the principle of cluster discovery and is not intended for production environments. Doing things like specifying a port offset on the command line is not a reliable way to configure cluster transport for production. Prerequisites Have one instance of Data Grid Server running. Procedure Open a terminal in USDRHDG_HOME . Copy the root directory to server2 . Specify a port offset and the server2 directory. Verification You can view cluster membership in the console at 127.0.0.1:11222/console/cluster-membership . Data Grid also logs the following messages when nodes join clusters: 1.8. Shutting down Data Grid Server Stop individually running servers or bring down clusters gracefully. Procedure Create a CLI connection to Data Grid. Shut down Data Grid Server in one of the following ways: Stop all nodes in a cluster with the shutdown cluster command, for example: This command saves cluster state to the data folder for each node in the cluster. If you use a cache store, the shutdown cluster command also persists all data in the cache. Stop individual server instances with the shutdown server command and the server hostname, for example: Important The shutdown server command does not wait for rebalancing operations to complete, which can lead to data loss if you specify multiple hostnames at the same time. Tip Run help shutdown for more details about using the command. Verification Data Grid logs the following messages when you shut down servers: 1.8.1. Shutdown and restart of Data Grid clusters Prevent data loss and ensure consistency of your cluster by properly shutting down and restarting nodes. Cluster shutdown Data Grid recommends using the shutdown cluster command to stop all nodes in a cluster while saving cluster state and persisting all data in the cache. You can use the shutdown cluster command also for clusters with a single node. When you bring Data Grid clusters back online, all nodes and caches in the cluster will be unavailable until all nodes rejoin. To prevent inconsistencies or data loss, Data Grid restricts access to the data stored in the cluster and modifications of the cluster state until the cluster is fully operational again. Additionally, Data Grid disables cluster rebalancing and prevents local cache stores purging on startup. During the cluster recovery process, the coordinator node logs messages for each new node joining, indicating which nodes are available and which are still missing. Other nodes in the Data Grid cluster have the view from the time they join. You can monitor availability of caches using the Data Grid Console or REST API. However, in cases where waiting for all nodes is not necessary nor desired, it is possible to set a cache available with the current topology. This approach is possible through the CLI, see below, or the REST API. Important Manually installing a topology can lead to data loss, only perform this operation if the initial topology cannot be recreated. Server shutdown After using the shutdown server command to bring nodes down, the first node to come back online will be available immediately without waiting for other members. The remaining nodes join the cluster immediately, triggering state transfer but loading the local persistence first, which might lead to stale entries. Local cache stores configured to purge on startup will be emptied when the server starts. 
Local cache stores marked as purge=false will be available after a server restarts but might contain stale entries. If you shutdown clustered nodes with the shutdown server command, you must restart each server in reverse order to avoid potential issues related to data loss and stale entries in the cache. For example, if you shutdown server1 and then shutdown server2 , you should first start server2 and then start server1 . However, restarting clustered nodes in reverse order does not completely prevent data loss and stale entries. 1.9. Data Grid Server installation directory structure Data Grid Server uses the following folders on the host filesystem under USDRHDG_HOME : Tip See the Data Grid Server README for descriptions of the each folder in your USDRHDG_HOME directory as well as system properties you can use to customize the filesystem. 1.9.1. Server root directory Apart from resources in the bin and docs folders, the only folder under USDRHDG_HOME that you should interact with is the server root directory, which is named server by default. You can create multiple nodes under the same USDRHDG_HOME directory or in different directories, but each Data Grid Server instance must have its own server root directory. For example, a cluster of 5 nodes could have the following server root directories on the filesystem: Each server root directory should contain the following folders: server/conf Holds infinispan.xml configuration files for a Data Grid Server instance. Data Grid separates configuration into two layers: Dynamic Create mutable cache configurations for data scalability. Data Grid Server permanently saves the caches you create at runtime along with the cluster state that is distributed across nodes. Each joining node receives a complete cluster state that Data Grid Server synchronizes across all nodes whenever changes occur. Static Add configuration to infinispan.xml for underlying server mechanisms such as cluster transport, security, and shared datasources. server/data Provides internal storage that Data Grid Server uses to maintain cluster state. Important Never directly delete or modify content in server/data . Modifying files such as caches.xml while the server is running can cause corruption. Deleting content can result in an incorrect state, which means clusters cannot restart after shutdown. server/lib Contains extension JAR files for custom filters, custom event listeners, JDBC drivers, custom ServerTask implementations, and so on. server/log Holds Data Grid Server log files. Additional resources Data Grid Server README What is stored in the <server>/data directory used by a RHDG server (Red Hat Knowledgebase)
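Putting the commands from this chapter together, a minimal local walk-through looks like the following sketch; it reuses only commands shown above, with the password and port offset as example values, and it runs both nodes in the background of one terminal purely for demonstration.
# Create an admin user before starting the first server
$ bin/cli.sh user create admin -p changeme
# Start the first Data Grid Server instance
$ bin/server.sh &
# Clone the server root directory and start a second node with a port offset
$ cp -r server server2
$ bin/server.sh -o 100 -s server2 &
# When you are finished, open a CLI session, connect to the running server,
# and run "shutdown cluster" from inside the session
$ bin/cli.sh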
[ "sha256sum jboss-datagrid-USD{version}-server.zip", "unzip redhat-datagrid-8.4.6-server.zip", "bin/server.sh", "bin\\server.bat", "ISPN080004: Protocol SINGLE_PORT listening on 127.0.0.1:11222 ISPN080034: Server '...' listening on http://127.0.0.1:11222 ISPN080001: Data Grid Server <version> started in <mm>ms", "bin/server.sh -c infinispan.xml -c datasources.yaml -c security-realms.json", "bin/cli.sh user create admin -p changeme", "cat server/conf/users.properties admin=scram-sha-1\\:BYGcIAwvf6b", "user roles grant --roles=deployer katie", "user roles ls katie [\"deployer\"]", "user create --groups=developers developers -p changeme", "user ls --groups", "user roles grant --roles=application developers", "user roles ls developers", "user groups john --groups=developers", "cat server/conf/groups.properties", "cp -r server server2", "bin/server.sh -o 100 -s server2", "INFO [org.infinispan.CLUSTER] (jgroups-11,<server_hostname>) ISPN000094: Received new cluster view for channel cluster: [<server_hostname>|3] (2) [<server_hostname>, <server2_hostname>] INFO [org.infinispan.CLUSTER] (jgroups-11,<server_hostname>) ISPN100000: Node <server2_hostname> joined the cluster", "shutdown cluster", "shutdown server <my_server01>", "ISPN080002: Data Grid Server stopping ISPN000080: Disconnecting JGroups channel cluster ISPN000390: Persisted state, version=<USDversion> timestamp=YYYY-MM-DDTHH:MM:SS ISPN080003: Data Grid Server stopped", "├── bin ├── boot ├── docs ├── lib ├── server └── static", "├── server ├── server1 ├── server2 ├── server3 └── server4", "├── server │ ├── conf │ ├── data │ ├── lib │ └── log" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_server_guide/server-getting-started
Chapter 7. Assigning permissions using roles and groups
Chapter 7. Assigning permissions using roles and groups Roles and groups have a similar purpose, which is to give users access and permissions to use applications. Groups are a collection of users to which you apply roles and attributes. Roles define specific applications permissions and access control. A role typically applies to one type of user. For example, an organization may include admin , user , manager , and employee roles. An application can assign access and permissions to a role and then assign multiple users to that role so the users have the same access and permissions. For example, the Admin Console has roles that give permission to users to access different parts of the Admin Console. There is a global namespace for roles and each client also has its own dedicated namespace where roles can be defined. 7.1. Creating a realm role Realm-level roles are a namespace for defining your roles. To see the list of roles, click Realm Roles in the menu. Procedure Click Create Role . Enter a Role Name . Enter a Description . Click Save . Add role The description field can be localized by specifying a substitution variable with USD{var-name} strings. The localized value is configured to your theme within the themes property files. See the Server Developer Guide for more details. 7.2. Client roles Client roles are namespaces dedicated to clients. Each client gets its own namespace. Client roles are managed under the Roles tab for each client. You interact with this UI the same way you do for realm-level roles. 7.3. Converting a role to a composite role Any realm or client level role can become a composite role . A composite role is a role that has one or more additional roles associated with it. When a composite role is mapped to a user, the user gains the roles associated with the composite role. This inheritance is recursive so users also inherit any composite of composites. However, we recommend that composite roles are not overused. Procedure Click Realm Roles in the menu. Click the role that you want to convert. From the Action list, select Add associated roles . Composite role The role selection UI is displayed on the page and you can associate realm level and client level roles to the composite role you are creating. In this example, the employee realm-level role is associated with the developer composite role. Any user with the developer role also inherits the employee role. Note When creating tokens and SAML assertions, any composite also has its associated roles added to the claims and assertions of the authentication response sent back to the client. 7.4. Assigning role mappings You can assign role mappings to a user through the Role Mappings tab for that user. Procedure Click Users in the menu. Click the user that you want to perform a role mapping on. Click the Role mappings tab. Click Assign role . Select the role you want to assign to the user from the dialog. Click Assign . Role mappings In the preceding example, we are assigning the composite role developer to a user. That role was created in the Composite Roles topic. Effective role mappings When the developer role is assigned, the employee role associated with the developer composite is displayed with Inherited "True". Inherited roles are the roles explicitly assigned to users and roles that are inherited from composites. 7.5. Using default roles Use default roles to automatically assign user role mappings when a user is created or imported through Identity Brokering . Procedure Click Realm settings in the menu. 
Click the User registration tab. Default roles This screenshot shows that some default roles already exist. 7.6. Role scope mappings On creation of an OIDC access token or SAML assertion, the user role mappings become claims within the token or assertion. Applications use these claims to make access decisions on the resources controlled by the application. Red Hat build of Keycloak digitally signs access tokens and applications reuse them to invoke remotely secured REST services. However, these tokens have an associated risk. An attacker can obtain these tokens and use their permissions to compromise your networks. To prevent this situation, use Role Scope Mappings . Role Scope Mappings limit the roles declared inside an access token. When a client requests a user authentication, the access token they receive contains only the role mappings that are explicitly specified for the client's scope. The result is that you limit the permissions of each individual access token instead of giving the client access to all the users permissions. By default, each client gets all the role mappings of the user. You can view the role mappings for a client. Procedure Click Clients in the menu. Click the client to go to the details. Click the Client scopes tab. Click the link in the row with Dedicated scope and mappers for this client Click the Scope tab. Full scope By default, the effective roles of scopes are every declared role in the realm. To change this default behavior, toggle Full Scope Allowed to OFF and declare the specific roles you want in each client. You can also use client scopes to define the same role scope mappings for a set of clients. Partial scope 7.7. Groups Groups in Red Hat build of Keycloak manage a common set of attributes and role mappings for each user. Users can be members of any number of groups and inherit the attributes and role mappings assigned to each group. To manage groups, click Groups in the menu. Groups Groups are hierarchical. A group can have multiple subgroups but a group can have only one parent. Subgroups inherit the attributes and role mappings from their parent. Users inherit the attributes and role mappings from their parent as well. If you have a parent group and a child group, and a user that belongs only to the child group, the user in the child group inherits the attributes and role mappings of both the parent group and the child group. The hierarchy of a group is sometimes represented using the group path. The path is the complete list of names that represents the hierarchy of a specific group, from top to bottom and separated by slashes / (similar to files in a File System). For example a path can be /top/level1/level2 which means that top is a top level group and is parent of level1 , which in turn is parent of level2 . This path represents unambiguously the hierarchy for the group level2 . Because of historical reasons Red Hat build of Keycloak, does not escape slashes in the group name itself. Therefore a group named level1/group under top uses the path /top/level1/group , which is misleading. Red Hat build of Keycloak can be started with the option --spi-group-jpa-escape-slashes-in-group-path to true and then the slashes in the name are escaped with the character ~ . The escape char marks that the slash is part of the name and has no hierarchical meaning. The path example would be /top/level1~/group when escaped. 
bin/kc.[sh|bat] start --spi-group-jpa-escape-slashes-in-group-path=true The following example includes a top-level Sales group and a child North America subgroup. To add a group: Click the group. Click Create group . Enter a group name. Click Create . Click the group name. The group management page is displayed. Group Attributes and role mappings you define are inherited by the groups and users that are members of the group. To add a user to a group: Click Users in the menu. Click the user that you want to perform a role mapping on. If the user is not displayed, click View all users . Click Groups . Click Join Group . Select a group from the dialog. Select a group from the Available Groups tree. Click Join . Join group To remove a group from a user: Click Users in the menu. Click the user to be removed from the group. Click Leave on the group table row. In this example, the user jimlincoln is in the North America group. You can see jimlincoln displayed under the Members tab for the group. Group membership 7.7.1. Groups compared to roles Groups and roles have some similarities and differences. In Red Hat build of Keycloak, groups are a collection of users to which you apply roles and attributes. Roles define types of users, and applications assign permissions and access control to roles. Composite Roles are similar to Groups as they provide the same functionality. The difference between them is conceptual. Composite roles apply the permission model to a set of services and applications. Use composite roles to manage applications and services. Groups focus on collections of users and their roles in an organization. Use groups to manage users. 7.7.2. Using default groups To automatically assign group membership to any users who is created or who is imported through Identity Brokering , you use default groups. Click Realm settings in the menu. Click the User registration tab. Click the Default Groups tab. Default groups This screenshot shows that some default groups already exist.
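The same role and group setup can be scripted with the Admin CLI (kcadm) instead of the Admin Console; the following is a hedged sketch that assumes a local server, a realm named myrealm, and an existing user named jdoe, none of which come from this chapter.
# Authenticate the CLI against the master realm (you are prompted for the admin password)
$ bin/kcadm.sh config credentials --server http://localhost:8080 --realm master --user admin
# Create a realm role
$ bin/kcadm.sh create roles -r myrealm -s name=developer -s 'description=Developer role'
# Assign the role to a user
$ bin/kcadm.sh add-roles -r myrealm --uusername jdoe --rolename developer
# Create a group that can later be granted the same role from the Admin Console
$ bin/kcadm.sh create groups -r myrealm -s name=developers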
[ "bin/kc.[sh|bat] start --spi-group-jpa-escape-slashes-in-group-path=true" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/server_administration_guide/assigning_permissions_using_roles_and_groups
2.4. Configuring Cascading Chaining
2.4. Configuring Cascading Chaining The database link can be configured to point to another database link, creating a cascading chaining operation. A cascading chain occurs any time more than one hop is required to access all of the data in a directory tree. 2.4.1. Overview of Cascading Chaining Cascading chaining occurs when more than one hop is required for the directory to process a client application's request. For example, the client application sends a modify request to Server 1. Server 1 contains a database link that forwards the operation to Server 2, which contains another database link. The database link on Server 2 forwards the operation to Server 3, which contains the data the client wants to modify in a database. Two hops are required to access the piece of data the client wants to modify. During a normal operation request, a client binds to the server, and then any ACIs applying to that client are evaluated. With cascading chaining, the client bind request is evaluated on Server 1, but the ACIs applying to the client are evaluated only after the request has been chained to the destination server, in the above example Server 2. For example, suppose a directory tree is split across three servers: The root suffix dc=example,dc=com and the ou=people and ou=groups sub-suffixes are stored on Server A. The ou=europe,dc=example,dc=com and ou=groups suffixes are stored on Server B, and the ou=people branch of the ou=europe,dc=example,dc=com suffix is stored on Server C. With cascading configured on servers A, B, and C, a client request targeted at the ou=people,ou=europe,dc=example,dc=com entry would be routed by the directory as follows: First, the client binds to Server A and chains to Server B using Database Link 1. Then Server B chains to the target database on Server C using Database Link 2 to access the data in the ou=people,ou=europe,dc=example,dc=com branch. Because at least two hops are required for the directory to service the client request, this is considered a cascading chain. 2.4.2. Configuring Cascading Chaining Using the Command Line This section provides an example of how to configure cascading chaining with three servers as shown in the following diagram: Configuration Steps on Server 1 Create the suffix c=africa,ou=people,dc=example,dc=com : Create the DBLink1 database link: Enable loop detection: Configuration Steps on Server 2 Create a proxy administrative user on server 2 for server 1 to use for proxy authorization: Important For security reasons, do not use the cn=Directory Manager account. Create the suffix ou=Zanzibar,c=africa,ou=people,dc=example,dc=com : Create the DBLink2 database link: Because the DBLink2 link is the intermediate database link in the cascading chaining configuration, enable the ACL check to allow the server to check whether it should allow the client and proxy administrative user access to the database link. Enable loop detection: Enable the proxy authorization control: Add the local proxy authorization ACI: Add an ACI that enables users in c=us,ou=people,dc=example,dc=com on server 1 who have a uid attribute set to perform any type of operation on the ou=Zanzibar,c=africa,ou=people,dc=example,dc=com suffix tree on server 3: If there are users on server 3 under a different suffix that will require additional rights on server 3, it is necessary to add additional client ACIs on server 2.
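Before moving on to server 3, you can optionally confirm that the proxy administrative entry exists on server 2; this check simply reads back the entry created above and is not required by the procedure.
$ ldapsearch -D "cn=Directory Manager" -W -p 389 -h server2.example.com -x -b "cn=config" "(cn=server1 proxy admin)" cn description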
Configuration Steps on Server 3 Create a proxy administrative user on server 3 for server 2 to use for proxy authorization: Important For security reasons, do not use the cn=Directory Manager account. Add the local proxy authorization ACI: Add an ACI that enables users in c=us,ou=people,dc=example,dc=com on server 1 who have a uid attribute set, to perform any type of operation on the ou=Zanzibar,c=africa,ou=people,dc=example,dc=com suffix tree on server 3: If there are users on server 3 under a different suffix that will require additional rights on server 3, it is necessary to add additional client ACIs on server 2. The cascading chaining configuration is now set up. This cascading configuration enables a user to bind to server 1 and modify information in the ou=Zanzibar,c=africa,ou=people,dc=example,dc=com branch on server 3. Depending on your security needs, it can be necessary to provide more detailed access control. 2.4.3. Detecting Loops An LDAP control included with Directory Server prevents loops. When first attempting to chain, the server sets this control to the maximum number of hops, or chaining connections, allowed. Each subsequent server decrements the count. If a server receives a count of 0 , it determines that a loop has been detected and notifies the client application. To use the control, add the 1.3.6.1.4.1.1466.29539.12 OID. For details about adding an LDAP control, see Section 2.3.2.2, "Chaining LDAP Controls" . If the control is not present in the configuration file of each database link, loop detection will not be implemented. The number of hops allowed is defined using the nsHopLimit parameter. By default, the parameter is set to 10 . For example, to set the hop limit of the example chain to 5 :
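With all three servers configured, you can exercise the full chain from a client's point of view; the following sketch binds to server 1 as a hypothetical user under c=us,ou=people,dc=example,dc=com (the uid is an example value, not part of this procedure) and searches the ou=Zanzibar subtree that physically resides on server 3.
# Bind to server 1 and read entries that are chained through server 2 to server 3
$ ldapsearch -D "uid=jdoe,c=us,ou=people,dc=example,dc=com" -W -p 389 -h server1.example.com -x -b "ou=Zanzibar,c=africa,ou=people,dc=example,dc=com" "(objectClass=*)"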
[ "dsconf -D \"cn=Directory Manager\" ldap://server1.example.com backend create --parent-suffix=\"ou=people,dc=example,dc=com\" --suffix=\"c=africa,ou=people,dc=example,dc=com\"", "dsconf -D \"cn=Directory Manager\" ldap://server1.example.com chaining link-create --suffix=\"c=africa,ou=people,dc=example,dc=com\" --server-url=\"ldap://africa.example.com:389/\" --bind-mech=\"\" --bind-dn=\"cn=server1 proxy admin,cn=config\" --bind-pw=\"password\" --check-aci=\"off\" \"DBLink1\"", "dsconf -D \"cn=Directory Manager\" ldap://server1.example.com chaining config-set --add-control=\"1.3.6.1.4.1.1466.29539.12\"", "ldapadd -D \"cn=Directory Manager\" -W -p 389 -h server2.example.com -x dn: cn=server1 proxy admin,cn=config objectclass: person objectclass: organizationalPerson objectclass: inetOrgPerson cn: server1 proxy admin sn: server1 proxy admin userPassword: password description: Entry for use by database links", "dsconf -D \"cn=Directory Manager\" ldap://server2.example.com backend create --parent-suffix=\"c=africa,ou=people,dc=example,dc=com\" --suffix=\"ou=Zanzibar,c=africa,ou=people,dc=example,dc=com\"", "dsconf -D \"cn=Directory Manager\" ldap://server2.example.com chaining link-create --suffix=\"ou=Zanzibar,c=africa,ou=people,dc=example,dc=com\" --server-url=\"ldap://zanz.africa.example.com:389/\" --bind-mech=\"\" --bind-dn=\"cn=server2 proxy admin,cn=config\" --bind-pw=\"password\" --check-aci=\"on\" \"DBLink2\"", "dsconf -D \"cn=Directory Manager\" ldap://server2.example.com chaining config-set --add-control=\"1.3.6.1.4.1.1466.29539.12\"", "dsconf -D \"cn=Directory Manager\" ldap://server2.example.com chaining config-set --add-control=\"2.16.840.1.113730.3.4.12\"", "ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h server2.example.com -x dn: c=africa,ou=people,dc=example,dc=com changetype: modify add: aci aci:(targetattr=\"*\")(target=\"ou=Zanzibar,c=africa,ou=people,dc=example,dc=com\") (version 3.0; acl \"Proxied authorization for database links\"; allow (proxy) userdn = \"ldap:///cn=server1 proxy admin,cn=config\";)", "ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h server2.example.com -x dn: c=africa,ou=people,dc=example,dc=com changetype: modify add: aci aci:(targetattr=\"*\")(target=\"ou=Zanzibar,c=africa,ou=people,dc=example,dc=com\") (version 3.0; acl \"Client authorization for database links\"; allow (all) userdn = \"ldap:///uid=*,c=us,ou=people,dc=example,dc=com\";)", "ldapadd -D \"cn=Directory Manager\" -W -p 389 -h server3.example.com -x dn: cn=server2 proxy admin,cn=config objectclass: person objectclass: organizationalPerson objectclass: inetOrgPerson cn: server2 proxy admin sn: server2 proxy admin userPassword: password description: Entry for use by database links", "ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h server3.example.com -x dn: ou=Zanzibar,ou=people,dc=example,dc=com changetype: modify add: aci aci: (targetattr = \"*\")(version 3.0; acl \"Proxied authorization for database links\"; allow (proxy) userdn = \"ldap:///cn=server2 proxy admin,cn=config\";)", "ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h server3.example.com -x dn: ou=Zanzibar,ou=people,dc=example,dc=com changetype: modify add: aci aci: (targetattr =\"*\")(target=\"ou=Zanzibar,c=africa,ou=people,dc=example,dc=com\") (version 3.0; acl \"Client authentication for database link users\"; allow (all) userdn = \"ldap:///uid=*,c=us,ou=people,dc=example,dc=com\";)", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com chaining link-set --hop-limit 5 example" ]
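Once the commands above have been applied on all three servers, a quick end-to-end check is to bind to server 1 as an ordinary client and modify an entry that physically resides on server 3. The following is a minimal sketch; it assumes a client entry uid=jdoe,c=us,ou=people,dc=example,dc=com and a target entry uid=tuser,ou=Zanzibar,c=africa,ou=people,dc=example,dc=com already exist, and both DNs are placeholders rather than part of the configuration above.

$ ldapmodify -D "uid=jdoe,c=us,ou=people,dc=example,dc=com" -W -p 389 -h server1.example.com -x <<'EOF'
dn: uid=tuser,ou=Zanzibar,c=africa,ou=people,dc=example,dc=com
changetype: modify
replace: description
description: Modified through the cascading chain (server 1 to server 2 to server 3)
EOF

If the proxy administrative users and ACIs are in place, the modify succeeds even though the entry is two hops away from the server the client bound to.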
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/creating_and_maintaining_database_links-advanced_feature_configuring_cascading_chaining
Chapter 2. Container security
Chapter 2. Container security 2.1. Understanding container security Securing a containerized application relies on multiple levels of security: Container security begins with a trusted base container image and continues through the container build process as it moves through your CI/CD pipeline. Important Image streams by default do not automatically update. This default behavior might create a security issue because security updates to images referenced by an image stream do not automatically occur. For information about how to override this default behavior, see Configuring periodic importing of imagestreamtags . When a container is deployed, its security depends on it running on secure operating systems and networks, and establishing firm boundaries between the container itself and the users and hosts that interact with it. Continued security relies on being able to scan container images for vulnerabilities and having an efficient way to correct and replace vulnerable images. Beyond what a platform such as OpenShift Container Platform offers out of the box, your organization will likely have its own security demands. Some level of compliance verification might be needed before you can even bring OpenShift Container Platform into your data center. Likewise, you may need to add your own agents, specialized hardware drivers, or encryption features to OpenShift Container Platform, before it can meet your organization's security standards. This guide provides a high-level walkthrough of the container security measures available in OpenShift Container Platform, including solutions for the host layer, the container and orchestration layer, and the build and application layer. It then points you to specific OpenShift Container Platform documentation to help you achieve those security measures. This guide contains the following information: Why container security is important and how it compares with existing security standards. Which container security measures are provided by the host (RHCOS and RHEL) layer and which are provided by OpenShift Container Platform. How to evaluate your container content and sources for vulnerabilities. How to design your build and deployment process to proactively check container content. How to control access to containers through authentication and authorization. How networking and attached storage are secured in OpenShift Container Platform. Containerized solutions for API management and SSO. The goal of this guide is to understand the incredible security benefits of using OpenShift Container Platform for your containerized workloads and how the entire Red Hat ecosystem plays a part in making and keeping containers secure. It will also help you understand how you can engage with the OpenShift Container Platform to achieve your organization's security goals. 2.1.1. What are containers? Containers package an application and all its dependencies into a single image that can be promoted from development, to test, to production, without change. A container might be part of a larger application that works closely with other containers. Containers provide consistency across environments and multiple deployment targets: physical servers, virtual machines (VMs), and private or public cloud. 
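As a concrete follow-up to the Important note earlier in this section, periodic importing can be enabled on a per-tag basis with the oc client. This is a minimal sketch; the image stream name ubi and the source image reference are illustrative assumptions.

$ oc tag registry.access.redhat.com/ubi9/ubi:latest ubi:latest --scheduled
$ oc get imagestream ubi -o jsonpath='{.spec.tags[?(@.name=="latest")].importPolicy.scheduled}'   # expect: true

With --scheduled set, the tag is re-imported on a regular interval, so security updates published for the source image are picked up instead of silently going stale.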
Some of the benefits of using containers include: Infrastructure Applications Sandboxed application processes on a shared Linux operating system kernel Package my application and all of its dependencies Simpler, lighter, and denser than virtual machines Deploy to any environment in seconds and enable CI/CD Portable across different environments Easily access and share containerized components See Understanding Linux containers from the Red Hat Customer Portal to find out more about Linux containers. To learn about RHEL container tools, see Building, running, and managing containers in the RHEL product documentation. 2.1.2. What is OpenShift Container Platform? Automating how containerized applications are deployed, run, and managed is the job of a platform such as OpenShift Container Platform. At its core, OpenShift Container Platform relies on the Kubernetes project to provide the engine for orchestrating containers across many nodes in scalable data centers. Kubernetes is a project, which can run using different operating systems and add-on components that offer no guarantees of supportability from the project. As a result, the security of different Kubernetes platforms can vary. OpenShift Container Platform is designed to lock down Kubernetes security and integrate the platform with a variety of extended components. To do this, OpenShift Container Platform draws on the extensive Red Hat ecosystem of open source technologies that include the operating systems, authentication, storage, networking, development tools, base container images, and many other components. OpenShift Container Platform can leverage Red Hat's experience in uncovering and rapidly deploying fixes for vulnerabilities in the platform itself as well as the containerized applications running on the platform. Red Hat's experience also extends to efficiently integrating new components with OpenShift Container Platform as they become available and adapting technologies to individual customer needs. Additional resources OpenShift Container Platform architecture OpenShift Security Guide 2.2. Understanding host and VM security Both containers and virtual machines provide ways of separating applications running on a host from the operating system itself. Understanding RHCOS, which is the operating system used by OpenShift Container Platform, will help you see how the host systems protect containers and hosts from each other. 2.2.1. Securing containers on Red Hat Enterprise Linux CoreOS (RHCOS) Containers simplify the act of deploying many applications to run on the same host, using the same kernel and container runtime to spin up each container. The applications can be owned by many users and, because they are kept separate, can run different, and even incompatible, versions of those applications at the same time without issue. In Linux, containers are just a special type of process, so securing containers is similar in many ways to securing any other running process. An environment for running containers starts with an operating system that can secure the host kernel from containers and other processes running on the host, as well as secure containers from each other. Because OpenShift Container Platform 4.13 runs on RHCOS hosts, with the option of using Red Hat Enterprise Linux (RHEL) as worker nodes, the following concepts apply by default to any deployed OpenShift Container Platform cluster. 
These RHEL security features are at the core of what makes running containers in OpenShift Container Platform more secure: Linux namespaces enable creating an abstraction of a particular global system resource to make it appear as a separate instance to processes within a namespace. Consequently, several containers can use the same computing resource simultaneously without creating a conflict. Container namespaces that are separate from the host by default include mount table, process table, network interface, user, control group, UTS, and IPC namespaces. Those containers that need direct access to host namespaces need to have elevated permissions to request that access. See Overview of Containers in Red Hat Systems from the RHEL 8 container documentation for details on the types of namespaces. SELinux provides an additional layer of security to keep containers isolated from each other and from the host. SELinux allows administrators to enforce mandatory access controls (MAC) for every user, application, process, and file. Warning Disabling SELinux on RHCOS is not supported. CGroups (control groups) limit, account for, and isolate the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes. CGroups are used to ensure that containers on the same host are not impacted by each other. Secure computing mode (seccomp) profiles can be associated with a container to restrict available system calls. See page 94 of the OpenShift Security Guide for details about seccomp. Deploying containers using RHCOS reduces the attack surface by minimizing the host environment and tuning it for containers. The CRI-O container engine further reduces that attack surface by implementing only those features required by Kubernetes and OpenShift Container Platform to run and manage containers, as opposed to other container engines that implement desktop-oriented standalone features. RHCOS is a version of Red Hat Enterprise Linux (RHEL) that is specially configured to work as control plane (master) and worker nodes on OpenShift Container Platform clusters. So RHCOS is tuned to efficiently run container workloads, along with Kubernetes and OpenShift Container Platform services. To further protect RHCOS systems in OpenShift Container Platform clusters, most containers, except those managing or monitoring the host system itself, should run as a non-root user. Dropping the privilege level or creating containers with the least amount of privileges possible is recommended best practice for protecting your own OpenShift Container Platform clusters. Additional resources How nodes enforce resource constraints Managing security context constraints Supported platforms for OpenShift clusters Requirements for a cluster with user-provisioned infrastructure Choosing how to configure RHCOS Ignition Kernel arguments Kernel modules Disk encryption Chrony time service About the OpenShift Update Service 2.2.2. Comparing virtualization and containers Traditional virtualization provides another way to keep application environments separate on the same physical host. However, virtual machines work in a different way than containers. Virtualization relies on a hypervisor spinning up guest virtual machines (VMs), each of which has its own operating system (OS), represented by a running kernel, as well as the running application and its dependencies. With VMs, the hypervisor isolates the guests from each other and from the host kernel. 
Fewer individuals and processes have access to the hypervisor, reducing the attack surface on the physical server. That said, security must still be monitored: one guest VM might be able to use hypervisor bugs to gain access to another VM or the host kernel. And, when the OS needs to be patched, it must be patched on all guest VMs using that OS. Containers can be run inside guest VMs, and there might be use cases where this is desirable. For example, you might be deploying a traditional application in a container, perhaps to lift-and-shift an application to the cloud. Container separation on a single host, however, provides a more lightweight, flexible, and easier-to-scale deployment solution. This deployment model is particularly appropriate for cloud-native applications. Containers are generally much smaller than VMs and consume less memory and CPU. See Linux Containers Compared to KVM Virtualization in the RHEL 7 container documentation to learn about the differences between container and VMs. 2.2.3. Securing OpenShift Container Platform When you deploy OpenShift Container Platform, you have the choice of an installer-provisioned infrastructure (there are several available platforms) or your own user-provisioned infrastructure. Some low-level security-related configuration, such as adding kernel modules required at first boot, might benefit from a user-provisioned infrastructure. Likewise, user-provisioned infrastructure is appropriate for disconnected OpenShift Container Platform deployments. Keep in mind that, when it comes to making security enhancements and other configuration changes to OpenShift Container Platform, the goals should include: Keeping the underlying nodes as generic as possible. You want to be able to easily throw away and spin up similar nodes quickly and in prescriptive ways. Managing modifications to nodes through OpenShift Container Platform as much as possible, rather than making direct, one-off changes to the nodes. In pursuit of those goals, most node changes should be done during installation through Ignition or later using MachineConfigs that are applied to sets of nodes by the Machine Config Operator. Examples of security-related configuration changes you can do in this way include: Adding kernel arguments Adding kernel modules Configuring disk encryption Configuring the chrony time service Besides the Machine Config Operator, there are several other Operators available to configure OpenShift Container Platform infrastructure that are managed by the Cluster Version Operator (CVO). The CVO is able to automate many aspects of OpenShift Container Platform cluster updates. 2.3. Hardening RHCOS RHCOS was created and tuned to be deployed in OpenShift Container Platform with few if any changes needed to RHCOS nodes. Every organization adopting OpenShift Container Platform has its own requirements for system hardening. As a RHEL system with OpenShift-specific modifications and features added (such as Ignition, ostree, and a read-only /usr to provide limited immutability), RHCOS can be hardened just as you would any RHEL system. Differences lie in the ways you manage the hardening. A key feature of OpenShift Container Platform and its Kubernetes engine is to be able to quickly scale applications and infrastructure up and down as needed. Unless it is unavoidable, you do not want to make direct changes to RHCOS by logging into a host and adding software or changing settings. 
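For example, rather than logging in to nodes to change boot parameters, a kernel argument can be rolled out declaratively through a machine config. This is a minimal sketch; the object name and the audit=1 argument are illustrative, and the Machine Config Operator reboots the nodes in the worker pool as it applies the change.

$ cat <<'EOF' | oc apply -f -
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-kernel-args-audit
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  kernelArguments:
    - audit=1
EOF

You can then watch the rollout with oc get mcp worker until the pool reports UPDATED as True.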
You want to have the OpenShift Container Platform installer and control plane manage changes to RHCOS so new nodes can be spun up without manual intervention. So, if you are setting out to harden RHCOS nodes in OpenShift Container Platform to meet your security needs, you should consider both what to harden and how to go about doing that hardening. 2.3.1. Choosing what to harden in RHCOS The RHEL 9 Security Hardening guide describes how you should approach security for any RHEL system. Use this guide to learn how to approach cryptography, evaluate vulnerabilities, and assess threats to various services. Likewise, you can learn how to scan for compliance standards, check file integrity, perform auditing, and encrypt storage devices. With the knowledge of what features you want to harden, you can then decide how to harden them in RHCOS. 2.3.2. Choosing how to harden RHCOS Direct modification of RHCOS systems in OpenShift Container Platform is discouraged. Instead, you should think of modifying systems in pools of nodes, such as worker nodes and control plane nodes. When a new node is needed, in non-bare metal installs, you can request a new node of the type you want and it will be created from an RHCOS image plus the modifications you created earlier. There are opportunities for modifying RHCOS before installation, during installation, and after the cluster is up and running. 2.3.2.1. Hardening before installation For bare metal installations, you can add hardening features to RHCOS before beginning the OpenShift Container Platform installation. For example, you can add kernel options when you boot the RHCOS installer to turn security features on or off, such as various SELinux booleans or low-level settings, such as symmetric multithreading. Warning Disabling SELinux on RHCOS nodes is not supported. Although bare metal RHCOS installations are more difficult, they offer the opportunity of getting operating system changes in place before starting the OpenShift Container Platform installation. This can be important when you need to ensure that certain features, such as disk encryption or special networking settings, be set up at the earliest possible moment. 2.3.2.2. Hardening during installation You can interrupt the OpenShift Container Platform installation process and change Ignition configs. Through Ignition configs, you can add your own files and systemd services to the RHCOS nodes. You can also make some basic security-related changes to the install-config.yaml file used for installation. Contents added in this way are available at each node's first boot. 2.3.2.3. Hardening after the cluster is running After the OpenShift Container Platform cluster is up and running, there are several ways to apply hardening features to RHCOS: Daemon set: If you need a service to run on every node, you can add that service with a Kubernetes DaemonSet object . Machine config: MachineConfig objects contain a subset of Ignition configs in the same format. By applying machine configs to all worker or control plane nodes, you can ensure that the node of the same type that is added to the cluster has the same changes applied. All of the features noted here are described in the OpenShift Container Platform product documentation. 
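A minimal sketch of the daemon set approach follows; the object name, the namespace placeholder, and the sleep command are assumptions that stand in for whatever hardening or monitoring agent you actually need on every node.

$ cat <<'EOF' | oc apply -n <your_namespace> -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      tolerations:
      - operator: Exists      # tolerate all taints so the pod also runs on control plane nodes
      containers:
      - name: agent
        image: registry.access.redhat.com/ubi9/ubi
        command: ["sleep", "infinity"]   # placeholder for the real agent process
EOF

Because the object is managed by the cluster, any node added later automatically receives the same pod, which keeps per-node configuration reproducible.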
Additional resources OpenShift Security Guide Choosing how to configure RHCOS Modifying Nodes Manually creating the installation configuration file Creating the Kubernetes manifest and Ignition config files Installing RHCOS by using an ISO image Customizing nodes Adding kernel arguments to Nodes Installation configuration parameters RHEL core crypto components 2.4. Container image signatures Red Hat delivers signatures for the images in the Red Hat Container Registries. Those signatures can be automatically verified when being pulled to OpenShift Container Platform 4 clusters by using the Machine Config Operator (MCO). Quay.io serves most of the images that make up OpenShift Container Platform, and only the release image is signed. Release images refer to the approved OpenShift Container Platform images, offering a degree of protection against supply chain attacks. However, some extensions to OpenShift Container Platform, such as logging, monitoring, and service mesh, are shipped as Operators from the Operator Lifecycle Manager (OLM). Those images ship from the Red Hat Ecosystem Catalog Container images registry. To verify the integrity of those images between Red Hat registries and your infrastructure, enable signature verification. 2.4.1. Enabling signature verification for Red Hat Container Registries Enabling container signature validation for Red Hat Container Registries requires writing a signature verification policy file specifying the keys to verify images from these registries. For RHEL8 nodes, the registries are already defined in /etc/containers/registries.d by default. Procedure Create a Butane config file, 51-worker-rh-registry-trust.bu , containing the necessary configuration for the worker nodes. Note See "Creating machine configs with Butane" for information about Butane. 
variant: openshift version: 4.13.0 metadata: name: 51-worker-rh-registry-trust labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/containers/policy.json mode: 0644 overwrite: true contents: inline: | { "default": [ { "type": "insecureAcceptAnything" } ], "transports": { "docker": { "registry.access.redhat.com": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ], "registry.redhat.io": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ] }, "docker-daemon": { "": [ { "type": "insecureAcceptAnything" } ] } } } Use Butane to generate a machine config YAML file, 51-worker-rh-registry-trust.yaml , containing the file to be written to disk on the worker nodes: USD butane 51-worker-rh-registry-trust.bu -o 51-worker-rh-registry-trust.yaml Apply the created machine config: USD oc apply -f 51-worker-rh-registry-trust.yaml Check that the worker machine config pool has rolled out with the new machine config: Check that the new machine config was created: USD oc get mc Sample output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 00-worker a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 51-master-rh-registry-trust 3.2.0 13s 51-worker-rh-registry-trust 3.2.0 53s 1 99-master-generated-crio-seccomp-use-default 3.2.0 25m 99-master-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-master-ssh 3.2.0 28m 99-worker-generated-crio-seccomp-use-default 3.2.0 25m 99-worker-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-worker-ssh 3.2.0 28m rendered-master-af1e7ff78da0a9c851bab4be2777773b a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 8s rendered-master-cd51fd0c47e91812bfef2765c52ec7e6 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-2b52f75684fbc711bd1652dd86fd0b82 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-be3b3bce4f4aa52a62902304bac9da3c a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 48s 2 1 New machine config 2 New rendered machine config Check that the worker machine config pool is updating with the new machine config: USD oc get mcp Sample output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-af1e7ff78da0a9c851bab4be2777773b True False False 3 3 3 0 30m worker rendered-worker-be3b3bce4f4aa52a62902304bac9da3c False True False 3 0 0 0 30m 1 1 When the UPDATING field is True , the machine config pool is updating with the new machine config. When the field becomes False , the worker machine config pool has rolled out to the new machine config. If your cluster uses any RHEL7 worker nodes, when the worker machine config pool is updated, create YAML files on those nodes in the /etc/containers/registries.d directory, which specify the location of the detached signatures for a given registry server. The following example works only for images hosted in registry.access.redhat.com and registry.redhat.io . 
Start a debug session to each RHEL7 worker node: USD oc debug node/<node_name> Change your root directory to /host : sh-4.2# chroot /host Create a /etc/containers/registries.d/registry.redhat.io.yaml file that contains the following: docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore Create a /etc/containers/registries.d/registry.access.redhat.com.yaml file that contains the following: docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore Exit the debug session. 2.4.2. Verifying the signature verification configuration After you apply the machine configs to the cluster, the Machine Config Controller detects the new MachineConfig object and generates a new rendered-worker-<hash> version. Prerequisites You enabled signature verification by using a machine config file. Procedure On the command line, run the following command to display information about a desired worker: USD oc describe machineconfigpool/worker Example output of initial worker monitoring Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool Metadata: Creation Timestamp: 2019-12-19T02:02:12Z Generation: 3 Resource Version: 16229 Self Link: /apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker UID: 92697796-2203-11ea-b48c-fa163e3940e5 Spec: Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Machine Config Selector: Match Labels: machineconfiguration.openshift.io/role: worker Node Selector: Match Labels: node-role.kubernetes.io/worker: Paused: false Status: Conditions: Last Transition Time: 2019-12-19T02:03:27Z Message: Reason: Status: False Type: RenderDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: NodeDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: Degraded Last Transition Time: 2019-12-19T02:28:23Z Message: Reason: Status: False Type: Updated Last Transition Time: 2019-12-19T02:28:23Z Message: All nodes are updating to rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updating Configuration: Name: rendered-worker-d9b3f4ffcfd65c30dcf591a0e8cf9b2e Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 1 Observed Generation: 3 Ready Machine Count: 0 Unavailable Machine Count: 1 Updated 
Machine Count: 0 Events: <none> Run the oc describe command again: USD oc describe machineconfigpool/worker Example output after the worker is updated ... Last Transition Time: 2019-12-19T04:53:09Z Message: All nodes are updated with rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updated Last Transition Time: 2019-12-19T04:53:09Z Message: Reason: Status: False Type: Updating Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 4 Ready Machine Count: 3 Unavailable Machine Count: 0 Updated Machine Count: 3 ... Note The Observed Generation parameter shows an increased count based on the generation of the controller-produced configuration. This controller updates this value even if it fails to process the specification and generate a revision. The Configuration Source value points to the 51-worker-rh-registry-trust configuration. Confirm that the policy.json file exists with the following command: USD oc debug node/<node> -- chroot /host cat /etc/containers/policy.json Example output Starting pod/<node>-debug ... To use host binaries, run `chroot /host` { "default": [ { "type": "insecureAcceptAnything" } ], "transports": { "docker": { "registry.access.redhat.com": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ], "registry.redhat.io": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ] }, "docker-daemon": { "": [ { "type": "insecureAcceptAnything" } ] } } } Confirm that the registry.redhat.io.yaml file exists with the following command: USD oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.redhat.io.yaml Example output Starting pod/<node>-debug ... To use host binaries, run `chroot /host` docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore Confirm that the registry.access.redhat.com.yaml file exists with the following command: USD oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.access.redhat.com.yaml Example output Starting pod/<node>-debug ... To use host binaries, run `chroot /host` docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore 2.4.3. Understanding the verification of container images lacking verifiable signatures Each OpenShift Container Platform release image is immutable and signed with a Red Hat production key. During an OpenShift Container Platform update or installation, a release image might deploy container images that do not have verifiable signatures. Each signed release image digest is immutable. Each reference in the release image is to the immutable digest of another image, so the contents can be trusted transitively. In other words, the signature on the release image validates all release contents. 
For example, the image references lacking a verifiable signature are contained in the signed OpenShift Container Platform release image: Example release info output USD oc adm release info quay.io/openshift-release-dev/ocp-release@sha256:2309578b68c5666dad62aed696f1f9d778ae1a089ee461060ba7b9514b7ca417 -o pullspec 1 quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9aafb914d5d7d0dec4edd800d02f811d7383a7d49e500af548eab5d00c1bffdb 2 1 Signed release image SHA. 2 Container image lacking a verifiable signature included in the release. 2.4.3.1. Automated verification during updates Verification of signatures is automatic. The OpenShift Cluster Version Operator (CVO) verifies signatures on the release images during an OpenShift Container Platform update. This is an internal process. An OpenShift Container Platform installation or update fails if the automated verification fails. Verification of signatures can also be done manually using the skopeo command-line utility. Additional resources Introduction to OpenShift Updates 2.4.3.2. Using skopeo to verify signatures of Red Hat container images You can verify the signatures for container images included in an OpenShift Container Platform release image by pulling those signatures from OCP release mirror site . Because the signatures on the mirror site are not in a format readily understood by Podman or CRI-O, you can use the skopeo standalone-verify command to verify that the your release images are signed by Red Hat. Prerequisites You have installed the skopeo command-line utility. Procedure Get the full SHA for your release by running the following command: USD oc adm release info <release_version> \ 1 1 Substitute <release_version> with your release number, for example, 4.14.3 . Example output snippet --- Pull From: quay.io/openshift-release-dev/ocp-release@sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55 --- Pull down the Red Hat release key by running the following command: USD curl -o pub.key https://access.redhat.com/security/data/fd431d51.txt Get the signature file for the specific release that you want to verify by running the following command: USD curl -o signature-1 https://mirror.openshift.com/pub/openshift-v4/signatures/openshift-release-dev/ocp-release/sha256%<sha_from_version>/signature-1 \ 1 1 Replace <sha_from_version> with SHA value from the full link to the mirror site that matches the SHA of your release. For example, the link to the signature for the 4.12.23 release is https://mirror.openshift.com/pub/openshift-v4/signatures/openshift-release-dev/ocp-release/sha256%e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55/signature-1 , and the SHA value is e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55 . Get the manifest for the release image by running the following command: USD skopeo inspect --raw docker://<quay_link_to_release> > manifest.json \ 1 1 Replace <quay_link_to_release> with the output of the oc adm release info command. For example, quay.io/openshift-release-dev/ocp-release@sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55 . Use skopeo to verify the signature: USD skopeo standalone-verify manifest.json quay.io/openshift-release-dev/ocp-release:<release_number>-<arch> any signature-1 --public-key-file pub.key where: <release_number> Specifies the release number, for example 4.14.3 . <arch> Specifies the architecture, for example x86_64 . 
Example output Signature verified using fingerprint 567E347AD0044ADE55BA8A5F199E2F91FD431D51, digest sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55 2.4.4. Additional resources Machine Config Overview 2.5. Understanding compliance For many OpenShift Container Platform customers, regulatory readiness, or compliance, on some level is required before any systems can be put into production. That regulatory readiness can be imposed by national standards, industry standards or the organization's corporate governance framework. 2.5.1. Understanding compliance and risk management To understand Red Hat's view of OpenShift Container Platform compliance frameworks, refer to the Risk Management and Regulatory Readiness chapter of the OpenShift Security Guide Book . 2.6. Securing container content To ensure the security of the content inside your containers you need to start with trusted base images, such as Red Hat Universal Base Images, and add trusted software. To check the ongoing security of your container images, there are both Red Hat and third-party tools for scanning images. 2.6.1. Securing inside the container Applications and infrastructures are composed of readily available components, many of which are open source packages such as, the Linux operating system, JBoss Web Server, PostgreSQL, and Node.js. Containerized versions of these packages are also available. However, you need to know where the packages originally came from, what versions are used, who built them, and whether there is any malicious code inside them. Some questions to answer include: Will what is inside the containers compromise your infrastructure? Are there known vulnerabilities in the application layer? Are the runtime and operating system layers current? By building your containers from Red Hat Universal Base Images (UBI) you are assured of a foundation for your container images that consists of the same RPM-packaged software that is included in Red Hat Enterprise Linux. No subscriptions are required to either use or redistribute UBI images. To assure ongoing security of the containers themselves, security scanning features, used directly from RHEL or added to OpenShift Container Platform, can alert you when an image you are using has vulnerabilities. OpenSCAP image scanning is available in RHEL and the Red Hat Quay Container Security Operator can be added to check container images used in OpenShift Container Platform. 2.6.2. Creating redistributable images with UBI To create containerized applications, you typically start with a trusted base image that offers the components that are usually provided by the operating system. These include the libraries, utilities, and other features the application expects to see in the operating system's file system. Red Hat Universal Base Images (UBI) were created to encourage anyone building their own containers to start with one that is made entirely from Red Hat Enterprise Linux rpm packages and other content. These UBI images are updated regularly to keep up with security patches and free to use and redistribute with container images built to include your own software. Search the Red Hat Ecosystem Catalog to both find and check the health of different UBI images. 
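As a small illustration of building on UBI, the following sketch creates an image from the standard ubi9 base and adds a package from the freely available UBI repositories; the httpd package and the target repository path are illustrative assumptions.

$ cat > Containerfile <<'EOF'
FROM registry.access.redhat.com/ubi9/ubi
RUN dnf install -y httpd && dnf clean all
EXPOSE 80
CMD ["httpd", "-DFOREGROUND"]
EOF
$ podman build -t quay.io/<your_org>/ubi-httpd:latest .

The resulting image can be scanned, used as a base for application layers, and redistributed without requiring a Red Hat subscription for the base content.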
As someone creating secure container images, you might be interested in these two general types of UBI images: UBI : There are standard UBI images for RHEL 7, 8, and 9 ( ubi7/ubi , ubi8/ubi , and ubi9/ubi ), as well as minimal images based on those systems ( ubi7/ubi-minimal , ubi8/ubi-minimal , and ubi9/ubi-minimal). All of these images are preconfigured to point to free repositories of RHEL software that you can add to the container images you build, using standard yum and dnf commands. Red Hat encourages people to use these images on other distributions, such as Fedora and Ubuntu. Red Hat Software Collections : Search the Red Hat Ecosystem Catalog for rhscl/ to find images created to use as base images for specific types of applications. For example, there are Apache httpd ( rhscl/httpd-* ), Python ( rhscl/python-* ), Ruby ( rhscl/ruby-* ), Node.js ( rhscl/nodejs-* ) and Perl ( rhscl/perl-* ) rhscl images. Keep in mind that while UBI images are freely available and redistributable, Red Hat support for these images is only available through Red Hat product subscriptions. See Using Red Hat Universal Base Images in the Red Hat Enterprise Linux documentation for information on how to use and build on standard, minimal and init UBI images. 2.6.3. Security scanning in RHEL For Red Hat Enterprise Linux (RHEL) systems, OpenSCAP scanning is available from the openscap-utils package. In RHEL, you can use the openscap-podman command to scan images for vulnerabilities. See Scanning containers and container images for vulnerabilities in the Red Hat Enterprise Linux documentation. OpenShift Container Platform enables you to leverage RHEL scanners with your CI/CD process. For example, you can integrate static code analysis tools that test for security flaws in your source code and software composition analysis tools that identify open source libraries to provide metadata on those libraries such as known vulnerabilities. 2.6.3.1. Scanning OpenShift images For the container images that are running in OpenShift Container Platform and are pulled from Red Hat Quay registries, you can use an Operator to list the vulnerabilities of those images. The Red Hat Quay Container Security Operator can be added to OpenShift Container Platform to provide vulnerability reporting for images added to selected namespaces. Container image scanning for Red Hat Quay is performed by the Clair security scanner. In Red Hat Quay, Clair can search for and report vulnerabilities in images built from RHEL, CentOS, Oracle, Alpine, Debian, and Ubuntu operating system software. 2.6.4. Integrating external scanning OpenShift Container Platform makes use of object annotations to extend functionality. External tools, such as vulnerability scanners, can annotate image objects with metadata to summarize results and control pod execution. This section describes the recognized format of this annotation so it can be reliably used in consoles to display useful data to users. 2.6.4.1. Image metadata There are different types of image quality data, including package vulnerabilities and open source software (OSS) license compliance. Additionally, there may be more than one provider of this metadata. To that end, the following annotation format has been reserved: quality.images.openshift.io/<qualityType>.<providerId>
Table 2.1. Annotation key format
Component | Description | Acceptable values
qualityType | Metadata type | vulnerability, license, operations, policy
providerId | Provider ID string | openscap, redhatcatalog, redhatinsights, blackduck, jfrog
2.6.4.1.1. 
Example annotation keys The value of the image quality annotation is structured data that must adhere to the following format: Table 2.2. Annotation value format Field Required? Description Type name Yes Provider display name String timestamp Yes Scan timestamp String description No Short description String reference Yes URL of information source or more details. Required so user may validate the data. String scannerVersion No Scanner version String compliant No Compliance pass or fail Boolean summary No Summary of issues found List (see table below) The summary field must adhere to the following format: Table 2.3. Summary field value format Field Description Type label Display label for component (for example, "critical," "important," "moderate," "low," or "health") String data Data for this component (for example, count of vulnerabilities found or score) String severityIndex Component index allowing for ordering and assigning graphical representation. The value is range 0..3 where 0 = low. Integer reference URL of information source or more details. Optional. String 2.6.4.1.2. Example annotation values This example shows an OpenSCAP annotation for an image with vulnerability summary data and a compliance boolean: OpenSCAP annotation { "name": "OpenSCAP", "description": "OpenSCAP vulnerability score", "timestamp": "2016-09-08T05:04:46Z", "reference": "https://www.open-scap.org/930492", "compliant": true, "scannerVersion": "1.2", "summary": [ { "label": "critical", "data": "4", "severityIndex": 3, "reference": null }, { "label": "important", "data": "12", "severityIndex": 2, "reference": null }, { "label": "moderate", "data": "8", "severityIndex": 1, "reference": null }, { "label": "low", "data": "26", "severityIndex": 0, "reference": null } ] } This example shows the Container images section of the Red Hat Ecosystem Catalog annotation for an image with health index data with an external URL for additional details: Red Hat Ecosystem Catalog annotation { "name": "Red Hat Ecosystem Catalog", "description": "Container health index", "timestamp": "2016-09-08T05:04:46Z", "reference": "https://access.redhat.com/errata/RHBA-2016:1566", "compliant": null, "scannerVersion": "1.2", "summary": [ { "label": "Health index", "data": "B", "severityIndex": 1, "reference": null } ] } 2.6.4.2. Annotating image objects While image stream objects are what an end user of OpenShift Container Platform operates against, image objects are annotated with security metadata. Image objects are cluster-scoped, pointing to a single image that may be referenced by many image streams and tags. 2.6.4.2.1. Example annotate CLI command Replace <image> with an image digest, for example sha256:401e359e0f45bfdcf004e258b72e253fd07fba8cc5c6f2ed4f4608fb119ecc2 : USD oc annotate image <image> \ quality.images.openshift.io/vulnerability.redhatcatalog='{ \ "name": "Red Hat Ecosystem Catalog", \ "description": "Container health index", \ "timestamp": "2020-06-01T05:04:46Z", \ "compliant": null, \ "scannerVersion": "1.2", \ "reference": "https://access.redhat.com/errata/RHBA-2020:2347", \ "summary": "[ \ { "label": "Health index", "data": "B", "severityIndex": 1, "reference": null } ]" }' 2.6.4.3. Controlling pod execution Use the images.openshift.io/deny-execution image policy to programmatically control if an image can be run. 2.6.4.3.1. Example annotation annotations: images.openshift.io/deny-execution: true 2.6.4.4. 
Integration reference In most cases, external tools such as vulnerability scanners develop a script or plugin that watches for image updates, performs scanning, and annotates the associated image object with the results. Typically this automation calls the OpenShift Container Platform 4.13 REST APIs to write the annotation. See OpenShift Container Platform REST APIs for general information on the REST APIs. 2.6.4.4.1. Example REST API call The following example call using curl overrides the value of the annotation. Be sure to replace the values for <token> , <openshift_server> , <image_id> , and <image_annotation> . Patch API call USD curl -X PATCH \ -H "Authorization: Bearer <token>" \ -H "Content-Type: application/merge-patch+json" \ https://<openshift_server>:6443/apis/image.openshift.io/v1/images/<image_id> \ --data '{ <image_annotation> }' The following is an example of PATCH payload data: Patch call data { "metadata": { "annotations": { "quality.images.openshift.io/vulnerability.redhatcatalog": "{ 'name': 'Red Hat Ecosystem Catalog', 'description': 'Container health index', 'timestamp': '2020-06-01T05:04:46Z', 'compliant': null, 'reference': 'https://access.redhat.com/errata/RHBA-2020:2347', 'summary': [{'label': 'Health index', 'data': '4', 'severityIndex': 1, 'reference': null}] }" } } } Additional resources Image stream objects 2.7. Using container registries securely Container registries store container images to: Make images accessible to others Organize images into repositories that can include multiple versions of an image Optionally limit access to images, based on different authentication methods, or make them publicly available There are public container registries, such as Quay.io and Docker Hub where many people and organizations share their images. The Red Hat Registry offers supported Red Hat and partner images, while the Red Hat Ecosystem Catalog offers detailed descriptions and health checks for those images. To manage your own registry, you could purchase a container registry such as Red Hat Quay . From a security standpoint, some registries provide special features to check and improve the health of your containers. For example, Red Hat Quay offers container vulnerability scanning with Clair security scanner, build triggers to automatically rebuild images when source code changes in GitHub and other locations, and the ability to use role-based access control (RBAC) to secure access to images. 2.7.1. Knowing where containers come from? There are tools you can use to scan and track the contents of your downloaded and deployed container images. However, there are many public sources of container images. When using public container registries, you can add a layer of protection by using trusted sources. 2.7.2. Immutable and certified containers Consuming security updates is particularly important when managing immutable containers . Immutable containers are containers that will never be changed while running. When you deploy immutable containers, you do not step into the running container to replace one or more binaries. From an operational standpoint, you rebuild and redeploy an updated container image to replace a container instead of changing it. 
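In practice, replacing rather than patching usually comes down to a single declarative update; a minimal sketch, assuming a deployment named myapp with a container of the same name and a rebuilt image tag:

$ oc set image deployment/myapp myapp=quay.io/<your_org>/myapp:1.0.1
$ oc rollout status deployment/myapp

The rollout replaces the running pods with pods based on the rebuilt image, so the old containers are discarded rather than modified in place.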
Red Hat certified images are: Free of known vulnerabilities in the platform components or layers Compatible across the RHEL platforms, from bare metal to cloud Supported by Red Hat The list of known vulnerabilities is constantly evolving, so you must track the contents of your deployed container images, as well as newly downloaded images, over time. You can use Red Hat Security Advisories (RHSAs) to alert you to any newly discovered issues in Red Hat certified container images, and direct you to the updated image. Alternatively, you can go to the Red Hat Ecosystem Catalog to look up that and other security-related issues for each Red Hat image. 2.7.3. Getting containers from Red Hat Registry and Ecosystem Catalog Red Hat lists certified container images for Red Hat products and partner offerings from the Container Images section of the Red Hat Ecosystem Catalog. From that catalog, you can see details of each image, including CVE, software packages listings, and health scores. Red Hat images are actually stored in what is referred to as the Red Hat Registry , which is represented by a public container registry ( registry.access.redhat.com ) and an authenticated registry ( registry.redhat.io ). Both include basically the same set of container images, with registry.redhat.io including some additional images that require authentication with Red Hat subscription credentials. Container content is monitored for vulnerabilities by Red Hat and updated regularly. When Red Hat releases security updates, such as fixes to glibc , DROWN , or Dirty Cow , any affected container images are also rebuilt and pushed to the Red Hat Registry. Red Hat uses a health index to reflect the security risk for each container provided through the Red Hat Ecosystem Catalog. Because containers consume software provided by Red Hat and the errata process, old, stale containers are insecure whereas new, fresh containers are more secure. To illustrate the age of containers, the Red Hat Ecosystem Catalog uses a grading system. A freshness grade is a measure of the oldest and most severe security errata available for an image. "A" is more up to date than "F". See Container Health Index grades as used inside the Red Hat Ecosystem Catalog for more details on this grading system. See the Red Hat Product Security Center for details on security updates and vulnerabilities related to Red Hat software. Check out Red Hat Security Advisories to search for specific advisories and CVEs. 2.7.4. OpenShift Container Registry OpenShift Container Platform includes the OpenShift Container Registry , a private registry running as an integrated component of the platform that you can use to manage your container images. The OpenShift Container Registry provides role-based access controls that allow you to manage who can pull and push which container images. OpenShift Container Platform also supports integration with other private registries that you might already be using, such as Red Hat Quay. Additional resources Integrated OpenShift image registry 2.7.5. Storing containers using Red Hat Quay Red Hat Quay is an enterprise-quality container registry product from Red Hat. Development for Red Hat Quay is done through the upstream Project Quay . Red Hat Quay is available to deploy on-premise or through the hosted version of Red Hat Quay at Quay.io . Security-related features of Red Hat Quay include: Time machine : Allows images with older tags to expire after a set period of time or based on a user-selected expiration time. 
Repository mirroring : Lets you mirror other registries for security reasons, such hosting a public repository on Red Hat Quay behind a company firewall, or for performance reasons, to keep registries closer to where they are used. Action log storage : Save Red Hat Quay logging output to Elasticsearch storage or Splunk to allow for later search and analysis. Clair : Scan images against a variety of Linux vulnerability databases, based on the origins of each container image. Internal authentication : Use the default local database to handle RBAC authentication to Red Hat Quay or choose from LDAP, Keystone (OpenStack), JWT Custom Authentication, or External Application Token authentication. External authorization (OAuth) : Allow authorization to Red Hat Quay from GitHub, GitHub Enterprise, or Google Authentication. Access settings : Generate tokens to allow access to Red Hat Quay from docker, rkt, anonymous access, user-created accounts, encrypted client passwords, or prefix username autocompletion. Ongoing integration of Red Hat Quay with OpenShift Container Platform continues, with several OpenShift Container Platform Operators of particular interest. The Quay Bridge Operator lets you replace the internal OpenShift image registry with Red Hat Quay. The Red Hat Quay Container Security Operator lets you check vulnerabilities of images running in OpenShift Container Platform that were pulled from Red Hat Quay registries. 2.8. Securing the build process In a container environment, the software build process is the stage in the life cycle where application code is integrated with the required runtime libraries. Managing this build process is key to securing the software stack. 2.8.1. Building once, deploying everywhere Using OpenShift Container Platform as the standard platform for container builds enables you to guarantee the security of the build environment. Adhering to a "build once, deploy everywhere" philosophy ensures that the product of the build process is exactly what is deployed in production. It is also important to maintain the immutability of your containers. You should not patch running containers, but rebuild and redeploy them. As your software moves through the stages of building, testing, and production, it is important that the tools making up your software supply chain be trusted. The following figure illustrates the process and tools that could be incorporated into a trusted software supply chain for containerized software: OpenShift Container Platform can be integrated with trusted code repositories (such as GitHub) and development platforms (such as Che) for creating and managing secure code. Unit testing could rely on Cucumber and JUnit . You could inspect your containers for vulnerabilities and compliance issues with Anchore or Twistlock, and use image scanning tools such as AtomicScan or Clair. Tools such as Sysdig could provide ongoing monitoring of your containerized applications. 2.8.2. Managing builds You can use Source-to-Image (S2I) to combine source code and base images. Builder images make use of S2I to enable your development and operations teams to collaborate on a reproducible build environment. With Red Hat S2I images available as Universal Base Image (UBI) images, you can now freely redistribute your software with base images built from real RHEL RPM packages. Red Hat has removed subscription restrictions to allow this. 
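A minimal S2I sketch follows, using the sample repository referenced elsewhere in this guide; the builder image name is an assumption, and any S2I-enabled builder could be substituted.

$ oc new-app registry.access.redhat.com/ubi8/nodejs-18~https://github.com/sclorg/nodejs-ex.git --name=nodejs-sample
$ oc logs -f buildconfig/nodejs-sample   # follow the S2I build as it layers the source onto the builder image

The build output is pushed to an image stream in the integrated registry and deployed, and it can be rebuilt whenever the source or the builder image changes.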
When developers commit code with Git for an application using build images, OpenShift Container Platform can perform the following functions: Trigger, either by using webhooks on the code repository or other automated continuous integration (CI) process, to automatically assemble a new image from available artifacts, the S2I builder image, and the newly committed code. Automatically deploy the newly built image for testing. Promote the tested image to production where it can be automatically deployed using a CI process. You can use the integrated OpenShift Container Registry to manage access to final images. Both S2I and native build images are automatically pushed to your OpenShift Container Registry. In addition to the included Jenkins for CI, you can also integrate your own build and CI environment with OpenShift Container Platform using RESTful APIs, as well as use any API-compliant image registry. 2.8.3. Securing inputs during builds In some scenarios, build operations require credentials to access dependent resources, but it is undesirable for those credentials to be available in the final application image produced by the build. You can define input secrets for this purpose. For example, when building a Node.js application, you can set up your private mirror for Node.js modules. To download modules from that private mirror, you must supply a custom .npmrc file for the build that contains a URL, user name, and password. For security reasons, you do not want to expose your credentials in the application image. Using this example scenario, you can add an input secret to a new BuildConfig object: Create the secret, if it does not exist: USD oc create secret generic secret-npmrc --from-file=.npmrc=~/.npmrc This creates a new secret named secret-npmrc , which contains the base64 encoded content of the ~/.npmrc file. Add the secret to the source section in the existing BuildConfig object: source: git: uri: https://github.com/sclorg/nodejs-ex.git secrets: - destinationDir: . secret: name: secret-npmrc To include the secret in a new BuildConfig object, run the following command: USD oc new-build \ openshift/nodejs-010-centos7~https://github.com/sclorg/nodejs-ex.git \ --build-secret secret-npmrc 2.8.4. Designing your build process You can design your container image management and build process to use container layers so that you can separate control. For example, an operations team manages base images, while architects manage middleware, runtimes, databases, and other solutions. Developers can then focus on application layers and focus on writing code. Because new vulnerabilities are identified daily, you need to proactively check container content over time. To do this, you should integrate automated security testing into your build or CI process. For example: SAST / DAST - Static and Dynamic security testing tools. Scanners for real-time checking against known vulnerabilities. Tools like these catalog the open source packages in your container, notify you of any known vulnerabilities, and update you when new vulnerabilities are discovered in previously scanned packages. Your CI process should include policies that flag builds with issues discovered by security scans so that your team can take appropriate action to address those issues. You should sign your custom built containers to ensure that nothing is tampered with between build and deployment. 
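A minimal sketch of signing at push time with podman follows; the GPG identity, registry, and repository are assumptions, and a matching key must exist on the build host.

$ podman push --sign-by security@example.com quay.io/<your_org>/myapp:1.0

The signature is written to the sigstore staging location configured on the build host, and consuming clusters can then require it through the policy.json and registries.d configuration described in the container image signatures section of this chapter.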
Using GitOps methodology, you can use the same CI/CD mechanisms to manage not only your application configurations, but also your OpenShift Container Platform infrastructure. 2.8.5. Building Knative serverless applications Relying on Kubernetes and Kourier, you can build, deploy, and manage serverless applications by using OpenShift Serverless in OpenShift Container Platform. As with other builds, you can use S2I images to build your containers, then serve them using Knative services. View Knative application builds through the Topology view of the OpenShift Container Platform web console. 2.8.6. Additional resources Understanding image builds Triggering and modifying builds Creating build inputs Input secrets and config maps OpenShift Serverless overview Viewing application composition using the Topology view 2.9. Deploying containers You can use a variety of techniques to make sure that the containers you deploy hold the latest production-quality content and that they have not been tampered with. These techniques include setting up build triggers to incorporate the latest code and using signatures to ensure that the container comes from a trusted source and has not been modified. 2.9.1. Controlling container deployments with triggers If something happens during the build process, or if a vulnerability is discovered after an image has been deployed, you can use tooling for automated, policy-based deployment to remediate. You can use triggers to rebuild and replace images, preserving the immutable container model, instead of patching running containers, which is not recommended. For example, you build an application using three container image layers: core, middleware, and applications. An issue is discovered in the core image and that image is rebuilt. After the build is complete, the image is pushed to your OpenShift Container Registry. OpenShift Container Platform detects that the image has changed and automatically rebuilds and deploys the application image, based on the defined triggers. This change incorporates the fixed libraries and ensures that the production code is identical to the most current image. You can use the oc set triggers command to set a deployment trigger. For example, to set a trigger for a deployment called deployment-example: USD oc set triggers deploy/deployment-example \ --from-image=example:latest \ --containers=web 2.9.2. Controlling what image sources can be deployed It is important that the intended images are actually being deployed, that the images, including their content, come from trusted sources, and that they have not been altered. Cryptographic signing provides this assurance. OpenShift Container Platform enables cluster administrators to apply security policy that is broad or narrow, reflecting deployment environment and security requirements. Two parameters define this policy: one or more registries, with an optional project namespace, and a trust type, such as accept, reject, or require public keys. You can use these policy parameters to allow, deny, or require a trust relationship for entire registries, parts of registries, or individual images. Using trusted public keys, you can ensure that the source is cryptographically verified. The policy rules apply to nodes. Policy may be applied uniformly across all nodes or targeted for different node workloads (for example, build, zone, or environment).
Example image signature policy file { "default": [{"type": "reject"}], "transports": { "docker": { "access.redhat.com": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ] }, "atomic": { "172.30.1.1:5000/openshift": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ], "172.30.1.1:5000/production": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/example.com/pubkey" } ], "172.30.1.1:5000": [{"type": "reject"}] } } } The policy can be saved onto a node as /etc/containers/policy.json . Saving this file to a node is best accomplished using a new MachineConfig object. This example enforces the following rules: Require images from the Red Hat Registry ( registry.access.redhat.com ) to be signed by the Red Hat public key. Require images from your OpenShift Container Registry in the openshift namespace to be signed by the Red Hat public key. Require images from your OpenShift Container Registry in the production namespace to be signed by the public key for example.com . Reject all other registries not specified by the global default definition. 2.9.3. Using signature transports A signature transport is a way to store and retrieve the binary signature blob. There are two types of signature transports. atomic : Managed by the OpenShift Container Platform API. docker : Served as a local file or by a web server. The OpenShift Container Platform API manages signatures that use the atomic transport type. You must store the images that use this signature type in your OpenShift Container Registry. Because the docker/distribution extensions API auto-discovers the image signature endpoint, no additional configuration is required. Signatures that use the docker transport type are served by local file or web server. These signatures are more flexible; you can serve images from any container image registry and use an independent server to deliver binary signatures. However, the docker transport type requires additional configuration. You must configure the nodes with the URI of the signature server by placing arbitrarily-named YAML files into a directory on the host system, /etc/containers/registries.d by default. The YAML configuration files contain a registry URI and a signature server URI, or sigstore : Example registries.d file docker: access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore In this example, the Red Hat Registry, access.redhat.com , is the signature server that provides signatures for the docker transport type. Its URI is defined in the sigstore parameter. You might name this file /etc/containers/registries.d/redhat.com.yaml and use the Machine Config Operator to automatically place the file on each node in your cluster. No service restart is required since policy and registries.d files are dynamically loaded by the container runtime. 2.9.4. Creating secrets and config maps The Secret object type provides a mechanism to hold sensitive information such as passwords, OpenShift Container Platform client configuration files, dockercfg files, and private source repository credentials. Secrets decouple sensitive content from pods. You can mount secrets into containers using a volume plugin or the system can use secrets to perform actions on behalf of a pod. 
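From the command line, a registry pull secret can be created and linked to a service account with a sketch like the following; the registry host and credentials are placeholders, and the console-based procedure follows below:

```
# Create a docker-registry secret for a private registry (placeholder values)
oc create secret docker-registry my-pull-secret \
  --docker-server=registry.example.com \
  --docker-username=<user> \
  --docker-password=<password>

# Allow the default service account to use the secret when pulling images
oc secrets link default my-pull-secret --for=pull
```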
For example, to add a secret to your deployment configuration so that it can access a private image repository, do the following: Procedure Log in to the OpenShift Container Platform web console. Create a new project. Navigate to Resources Secrets and create a new secret. Set Secret Type to Image Secret and Authentication Type to Image Registry Credentials to enter credentials for accessing a private image repository. When creating a deployment configuration (for example, from the Add to Project Deploy Image page), set the Pull Secret to your new secret. Config maps are similar to secrets, but are designed to support working with strings that do not contain sensitive information. The ConfigMap object holds key-value pairs of configuration data that can be consumed in pods or used to store configuration data for system components such as controllers. 2.9.5. Automating continuous deployment You can integrate your own continuous deployment (CD) tooling with OpenShift Container Platform. By leveraging CI/CD and OpenShift Container Platform, you can automate the process of rebuilding the application to incorporate the latest fixes, testing, and ensuring that it is deployed everywhere within the environment. Additional resources Input secrets and config maps 2.10. Securing the container platform OpenShift Container Platform and Kubernetes APIs are key to automating container management at scale. APIs are used to: Validate and configure the data for pods, services, and replication controllers. Perform project validation on incoming requests and invoke triggers on other major system components. Security-related features in OpenShift Container Platform that are based on Kubernetes include: Multitenancy, which combines Role-Based Access Controls and network policies to isolate containers at multiple levels. Admission plugins, which form boundaries between an API and those making requests to the API. OpenShift Container Platform uses Operators to automate and simplify the management of Kubernetes-level security features. 2.10.1. Isolating containers with multitenancy Multitenancy allows applications on an OpenShift Container Platform cluster that are owned by multiple users, and run across multiple hosts and namespaces, to remain isolated from each other and from outside attacks. You obtain multitenancy by applying role-based access control (RBAC) to Kubernetes namespaces. In Kubernetes, namespaces are areas where applications can run in ways that are separate from other applications. OpenShift Container Platform uses and extends namespaces by adding extra annotations, including MCS labeling in SELinux, and identifying these extended namespaces as projects . Within the scope of a project, users can maintain their own cluster resources, including service accounts, policies, constraints, and various other objects. RBAC objects are assigned to projects to authorize selected users to have access to those projects. That authorization takes the form of rules, roles, and bindings: Rules define what a user can create or access in a project. Roles are collections of rules that you can bind to selected users or groups. Bindings define the association between users or groups and roles. Local RBAC roles and bindings attach a user or group to a particular project. Cluster RBAC can attach cluster-wide roles and bindings to all projects in a cluster. There are default cluster roles that can be assigned to provide admin , basic-user , cluster-admin , and cluster-status access. 2.10.2. 
Protecting control plane with admission plugins While RBAC controls access rules between users and groups and available projects, admission plugins define access to the OpenShift Container Platform master API. Admission plugins form a chain of rules that consist of: Default admission plugins: These implement a default set of policies and resource limits that are applied to components of the OpenShift Container Platform control plane. Mutating admission plugins: These plugins dynamically extend the admission chain. They call out to a webhook server and can both authenticate a request and modify the selected resource. Validating admission plugins: These validate requests for a selected resource and can both validate the request and ensure that the resource does not change again. API requests go through admission plugins in a chain, with any failure along the way causing the request to be rejected. Each admission plugin is associated with particular resources and only responds to requests for those resources. 2.10.2.1. Security context constraints (SCCs) You can use security context constraints (SCCs) to define a set of conditions that a pod must run with to be accepted into the system. Some aspects that can be managed by SCCs include: Running of privileged containers Capabilities a container can request to be added Use of host directories as volumes SELinux context of the container Container user ID If you have the required permissions, you can adjust the default SCC policies to be more permissive, if required. 2.10.2.2. Granting roles to service accounts You can assign roles to service accounts, in the same way that users are assigned role-based access. There are three default service accounts created for each project. A service account: is limited in scope to a particular project derives its name from its project is automatically assigned an API token and credentials to access the OpenShift Container Registry Service accounts associated with platform components automatically have their keys rotated. 2.10.3. Authentication and authorization 2.10.3.1. Controlling access using OAuth You can use API access control via authentication and authorization for securing your container platform. The OpenShift Container Platform master includes a built-in OAuth server. Users can obtain OAuth access tokens to authenticate themselves to the API. As an administrator, you can configure OAuth to authenticate using an identity provider , such as LDAP, GitHub, or Google. The identity provider is used by default for new OpenShift Container Platform deployments, but you can configure this at initial installation time or postinstallation. 2.10.3.2. API access control and management Applications can have multiple, independent API services which have different endpoints that require management. OpenShift Container Platform includes a containerized version of the 3scale API gateway so that you can manage your APIs and control access. 3scale gives you a variety of standard options for API authentication and security, which can be used alone or in combination to issue credentials and control access: standard API keys, application ID and key pair, and OAuth 2.0. You can restrict access to specific endpoints, methods, and services and apply access policy for groups of users. Application plans allow you to set rate limits for API usage and control traffic flow for groups of developers.
For a tutorial on using APIcast v2, the containerized 3scale API Gateway, see Running APIcast on Red Hat OpenShift in the 3scale documentation. 2.10.3.3. Red Hat Single Sign-On The Red Hat Single Sign-On server enables you to secure your applications by providing web single sign-on capabilities based on standards, including SAML 2.0, OpenID Connect, and OAuth 2.0. The server can act as a SAML or OpenID Connect-based identity provider (IdP), mediating with your enterprise user directory or third-party identity provider for identity information and your applications using standards-based tokens. You can integrate Red Hat Single Sign-On with LDAP-based directory services including Microsoft Active Directory and Red Hat Enterprise Linux Identity Management. 2.10.3.4. Secure self-service web console OpenShift Container Platform provides a self-service web console to ensure that teams do not access other environments without authorization. OpenShift Container Platform ensures a secure multitenant master by providing the following: Access to the master uses Transport Layer Security (TLS) Access to the API Server uses X.509 certificates or OAuth access tokens Project quota limits the damage that a rogue token could do The etcd service is not exposed directly to the cluster 2.10.4. Managing certificates for the platform OpenShift Container Platform has multiple components within its framework that use REST-based HTTPS communication leveraging encryption via TLS certificates. OpenShift Container Platform's installer configures these certificates during installation. There are some primary components that generate this traffic: masters (API server and controllers) etcd nodes registry router 2.10.4.1. Configuring custom certificates You can configure custom serving certificates for the public hostnames of the API server and web console during initial installation or when redeploying certificates. You can also use a custom CA. Additional resources Introduction to OpenShift Container Platform Using RBAC to define and apply permissions About admission plugins Managing security context constraints SCC reference commands Examples of granting roles to service accounts Configuring the internal OAuth server Understanding identity provider configuration Certificate types and descriptions Proxy certificates 2.11. Securing networks Network security can be managed at several levels. At the pod level, network namespaces can prevent containers from seeing other pods or the host system by restricting network access. Network policies give you control over allowing and rejecting connections. You can manage ingress and egress traffic to and from your containerized applications. 2.11.1. Using network namespaces OpenShift Container Platform uses software-defined networking (SDN) to provide a unified cluster network that enables communication between containers across the cluster. Network policy mode, by default, makes all pods in a project accessible from other pods and network endpoints. To isolate one or more pods in a project, you can create NetworkPolicy objects in that project to indicate the allowed incoming connections. Using multitenant mode, you can provide project-level isolation for pods and services. 2.11.2. Isolating pods with network policies Using network policies , you can isolate pods from each other in the same project. 
Network policies can deny all network access to a pod, only allow connections for the Ingress Controller, reject connections from pods in other projects, or set similar rules for how networks behave. Additional resources About network policy 2.11.3. Using multiple pod networks Each running container has only one network interface by default. The Multus CNI plugin lets you create multiple CNI networks, and then attach any of those networks to your pods. In that way, you can do things like separate private data onto a more restricted network and have multiple network interfaces on each node. Additional resources Using multiple networks 2.11.4. Isolating applications OpenShift Container Platform enables you to segment network traffic on a single cluster to make multitenant clusters that isolate users, teams, applications, and environments from non-global resources. Additional resources Configuring network isolation using OpenShiftSDN 2.11.5. Securing ingress traffic There are many security implications related to how you configure access to your Kubernetes services from outside of your OpenShift Container Platform cluster. Besides exposing HTTP and HTTPS routes, ingress routing allows you to set up NodePort or LoadBalancer ingress types. NodePort exposes an application's service API object from each cluster worker. LoadBalancer lets you assign an external load balancer to an associated service API object in your OpenShift Container Platform cluster. Additional resources Configuring ingress cluster traffic 2.11.6. Securing egress traffic OpenShift Container Platform provides the ability to control egress traffic using either a router or firewall method. For example, you can use IP whitelisting to control database access. A cluster administrator can assign one or more egress IP addresses to a project in an OpenShift Container Platform SDN network provider. Likewise, a cluster administrator can prevent egress traffic from going outside of an OpenShift Container Platform cluster using an egress firewall. By assigning a fixed egress IP address, you can have all outgoing traffic assigned to that IP address for a particular project. With the egress firewall, you can prevent a pod from connecting to an external network, prevent a pod from connecting to an internal network, or limit a pod's access to specific internal subnets. Additional resources Configuring an egress firewall to control access to external IP addresses Configuring egress IPs for a project 2.12. Securing attached storage OpenShift Container Platform supports multiple types of storage, both for on-premise and cloud providers. In particular, OpenShift Container Platform can use storage types that support the Container Storage Interface. 2.12.1. Persistent volume plugins Containers are useful for both stateless and stateful applications. Protecting attached storage is a key element of securing stateful services. Using the Container Storage Interface (CSI), OpenShift Container Platform can incorporate storage from any storage back end that supports the CSI interface. OpenShift Container Platform provides plugins for multiple types of storage, including: Red Hat OpenShift Data Foundation * AWS Elastic Block Stores (EBS) * AWS Elastic File System (EFS) * Azure Disk * Azure File * OpenStack Cinder * GCE Persistent Disks * VMware vSphere * Network File System (NFS) FlexVolume Fibre Channel iSCSI Plugins for those storage types with dynamic provisioning are marked with an asterisk (*). 
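For the storage types marked above as supporting dynamic provisioning, a claim is typically all that an application needs to request a volume. The following sketch assumes a storage class named standard-csi exists in the cluster:

```
# Request a dynamically provisioned volume through a PersistentVolumeClaim
oc apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard-csi
EOF
```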
Data in transit is encrypted via HTTPS for all OpenShift Container Platform components communicating with each other. You can mount a persistent volume (PV) on a host in any way supported by your storage type. Different types of storage have different capabilities and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported on the server as read-only. Each PV has its own set of access modes describing that specific PV's capabilities, such as ReadWriteOnce , ReadOnlyMany , and ReadWriteMany . 2.12.2. Shared storage For shared storage providers like NFS, the PV registers its group ID (GID) as an annotation on the PV resource. Then, when the PV is claimed by the pod, the annotated GID is added to the supplemental groups of the pod, giving that pod access to the contents of the shared storage. 2.12.3. Block storage For block storage providers like AWS Elastic Block Store (EBS), GCE Persistent Disks, and iSCSI, OpenShift Container Platform uses SELinux capabilities to secure the root of the mounted volume for non-privileged pods, making the mounted volume owned by and only visible to the container with which it is associated. Additional resources Understanding persistent storage Configuring CSI volumes Dynamic provisioning Persistent storage using NFS Persistent storage using AWS Elastic Block Store Persistent storage using GCE Persistent Disk 2.13. Monitoring cluster events and logs The ability to monitor and audit an OpenShift Container Platform cluster is an important part of safeguarding the cluster and its users against inappropriate usage. There are two main sources of cluster-level information that are useful for this purpose: events and logging. 2.13.1. Watching cluster events Cluster administrators are encouraged to familiarize themselves with the Event resource type and review the list of system events to determine which events are of interest. Events are associated with a namespace, either the namespace of the resource they are related to or, for cluster events, the default namespace. The default namespace holds relevant events for monitoring or auditing a cluster, such as node events and resource events related to infrastructure components. The master API and oc command do not provide parameters to scope a listing of events to only those related to nodes. A simple approach would be to use grep : USD oc get event -n default | grep Node Example output 1h 20h 3 origin-node-1.example.local Node Normal NodeHasDiskPressure ... A more flexible approach is to output the events in a form that other tools can process. For example, the following example uses the jq tool against JSON output to extract only NodeHasDiskPressure events: USD oc get events -n default -o json \ | jq '.items[] | select(.involvedObject.kind == "Node" and .reason == "NodeHasDiskPressure")' Example output { "apiVersion": "v1", "count": 3, "involvedObject": { "kind": "Node", "name": "origin-node-1.example.local", "uid": "origin-node-1.example.local" }, "kind": "Event", "reason": "NodeHasDiskPressure", ... } Events related to resource creation, modification, or deletion can also be good candidates for detecting misuse of the cluster. 
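For instance, a similar query can count events that indicate pods being killed, which can help spot unexpected deletions; the reason value used here is the standard kubelet Killing event:

```
# Count pod "Killing" events across all namespaces
oc get events --all-namespaces -o json \
  | jq '[.items[] | select(.involvedObject.kind == "Pod" and .reason == "Killing")] | length'
```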
The following query, for example, can be used to look for excessive pulling of images: USD oc get events --all-namespaces -o json \ | jq '[.items[] | select(.involvedObject.kind == "Pod" and .reason == "Pulling")] | length' Example output 4 Note When a namespace is deleted, its events are deleted as well. Events can also expire and are deleted to prevent filling up etcd storage. Events are not stored as a permanent record and frequent polling is necessary to capture statistics over time. 2.13.2. Logging Using the oc logs command, you can view container logs, build configs and deployments in real time. Different users can have different access to logs: Users who have access to a project are able to see the logs for that project by default. Users with admin roles can access all container logs. To save your logs for further audit and analysis, you can enable the cluster-logging add-on feature to collect, manage, and view system, container, and audit logs. You can deploy, manage, and upgrade OpenShift Logging through the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator. 2.13.3. Audit logs With audit logs , you can follow a sequence of activities associated with how a user, administrator, or other OpenShift Container Platform component is behaving. API audit logging is done on each server. Additional resources List of system events Understanding OpenShift Logging Viewing audit logs
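As a quick check of the API audit logs mentioned above, the log files can be listed and read directly from the control plane nodes; the node name is a placeholder:

```
# List the audit log files available on control plane nodes
oc adm node-logs --role=master --path=kube-apiserver/

# View a specific audit log file from one node
oc adm node-logs <node_name> --path=kube-apiserver/audit.log
```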
[ "variant: openshift version: 4.13.0 metadata: name: 51-worker-rh-registry-trust labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/containers/policy.json mode: 0644 overwrite: true contents: inline: | { \"default\": [ { \"type\": \"insecureAcceptAnything\" } ], \"transports\": { \"docker\": { \"registry.access.redhat.com\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ], \"registry.redhat.io\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ] }, \"docker-daemon\": { \"\": [ { \"type\": \"insecureAcceptAnything\" } ] } } }", "butane 51-worker-rh-registry-trust.bu -o 51-worker-rh-registry-trust.yaml", "oc apply -f 51-worker-rh-registry-trust.yaml", "oc get mc", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 00-worker a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 51-master-rh-registry-trust 3.2.0 13s 51-worker-rh-registry-trust 3.2.0 53s 1 99-master-generated-crio-seccomp-use-default 3.2.0 25m 99-master-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-master-ssh 3.2.0 28m 99-worker-generated-crio-seccomp-use-default 3.2.0 25m 99-worker-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-worker-ssh 3.2.0 28m rendered-master-af1e7ff78da0a9c851bab4be2777773b a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 8s rendered-master-cd51fd0c47e91812bfef2765c52ec7e6 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-2b52f75684fbc711bd1652dd86fd0b82 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-be3b3bce4f4aa52a62902304bac9da3c a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 48s 2", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-af1e7ff78da0a9c851bab4be2777773b True False False 3 3 3 0 30m worker rendered-worker-be3b3bce4f4aa52a62902304bac9da3c False True False 3 0 0 0 30m 1", "oc debug node/<node_name>", "sh-4.2# chroot /host", "docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore", "docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore", "oc describe machineconfigpool/worker", "Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool Metadata: Creation Timestamp: 2019-12-19T02:02:12Z Generation: 3 Resource Version: 16229 Self Link: /apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker UID: 92697796-2203-11ea-b48c-fa163e3940e5 Spec: Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 
51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Machine Config Selector: Match Labels: machineconfiguration.openshift.io/role: worker Node Selector: Match Labels: node-role.kubernetes.io/worker: Paused: false Status: Conditions: Last Transition Time: 2019-12-19T02:03:27Z Message: Reason: Status: False Type: RenderDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: NodeDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: Degraded Last Transition Time: 2019-12-19T02:28:23Z Message: Reason: Status: False Type: Updated Last Transition Time: 2019-12-19T02:28:23Z Message: All nodes are updating to rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updating Configuration: Name: rendered-worker-d9b3f4ffcfd65c30dcf591a0e8cf9b2e Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 1 Observed Generation: 3 Ready Machine Count: 0 Unavailable Machine Count: 1 Updated Machine Count: 0 Events: <none>", "oc describe machineconfigpool/worker", "Last Transition Time: 2019-12-19T04:53:09Z Message: All nodes are updated with rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updated Last Transition Time: 2019-12-19T04:53:09Z Message: Reason: Status: False Type: Updating Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 4 Ready Machine Count: 3 Unavailable Machine Count: 0 Updated Machine Count: 3", "oc debug node/<node> -- chroot /host cat /etc/containers/policy.json", "Starting pod/<node>-debug To use host binaries, run `chroot /host` { \"default\": [ { \"type\": \"insecureAcceptAnything\" } ], \"transports\": { \"docker\": { \"registry.access.redhat.com\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ], \"registry.redhat.io\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ] }, \"docker-daemon\": { \"\": [ { \"type\": \"insecureAcceptAnything\" } ] } } }", "oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.redhat.io.yaml", "Starting 
pod/<node>-debug To use host binaries, run `chroot /host` docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore", "oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.access.redhat.com.yaml", "Starting pod/<node>-debug To use host binaries, run `chroot /host` docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore", "oc adm release info quay.io/openshift-release-dev/ocp-release@sha256:2309578b68c5666dad62aed696f1f9d778ae1a089ee461060ba7b9514b7ca417 -o pullspec 1 quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9aafb914d5d7d0dec4edd800d02f811d7383a7d49e500af548eab5d00c1bffdb 2", "oc adm release info <release_version> \\ 1", "--- Pull From: quay.io/openshift-release-dev/ocp-release@sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55 ---", "curl -o pub.key https://access.redhat.com/security/data/fd431d51.txt", "curl -o signature-1 https://mirror.openshift.com/pub/openshift-v4/signatures/openshift-release-dev/ocp-release/sha256%<sha_from_version>/signature-1 \\ 1", "skopeo inspect --raw docker://<quay_link_to_release> > manifest.json \\ 1", "skopeo standalone-verify manifest.json quay.io/openshift-release-dev/ocp-release:<release_number>-<arch> any signature-1 --public-key-file pub.key", "Signature verified using fingerprint 567E347AD0044ADE55BA8A5F199E2F91FD431D51, digest sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55", "quality.images.openshift.io/<qualityType>.<providerId>: {}", "quality.images.openshift.io/vulnerability.blackduck: {} quality.images.openshift.io/vulnerability.jfrog: {} quality.images.openshift.io/license.blackduck: {} quality.images.openshift.io/vulnerability.openscap: {}", "{ \"name\": \"OpenSCAP\", \"description\": \"OpenSCAP vulnerability score\", \"timestamp\": \"2016-09-08T05:04:46Z\", \"reference\": \"https://www.open-scap.org/930492\", \"compliant\": true, \"scannerVersion\": \"1.2\", \"summary\": [ { \"label\": \"critical\", \"data\": \"4\", \"severityIndex\": 3, \"reference\": null }, { \"label\": \"important\", \"data\": \"12\", \"severityIndex\": 2, \"reference\": null }, { \"label\": \"moderate\", \"data\": \"8\", \"severityIndex\": 1, \"reference\": null }, { \"label\": \"low\", \"data\": \"26\", \"severityIndex\": 0, \"reference\": null } ] }", "{ \"name\": \"Red Hat Ecosystem Catalog\", \"description\": \"Container health index\", \"timestamp\": \"2016-09-08T05:04:46Z\", \"reference\": \"https://access.redhat.com/errata/RHBA-2016:1566\", \"compliant\": null, \"scannerVersion\": \"1.2\", \"summary\": [ { \"label\": \"Health index\", \"data\": \"B\", \"severityIndex\": 1, \"reference\": null } ] }", "oc annotate image <image> quality.images.openshift.io/vulnerability.redhatcatalog='{ \"name\": \"Red Hat Ecosystem Catalog\", \"description\": \"Container health index\", \"timestamp\": \"2020-06-01T05:04:46Z\", \"compliant\": null, \"scannerVersion\": \"1.2\", \"reference\": \"https://access.redhat.com/errata/RHBA-2020:2347\", \"summary\": \"[ { \"label\": \"Health index\", \"data\": \"B\", \"severityIndex\": 1, \"reference\": null } ]\" }'", "annotations: images.openshift.io/deny-execution: true", "curl -X PATCH -H \"Authorization: Bearer <token>\" -H \"Content-Type: application/merge-patch+json\" https://<openshift_server>:6443/apis/image.openshift.io/v1/images/<image_id> --data '{ <image_annotation> }'", "{ \"metadata\": { \"annotations\": { \"quality.images.openshift.io/vulnerability.redhatcatalog\": \"{ 
'name': 'Red Hat Ecosystem Catalog', 'description': 'Container health index', 'timestamp': '2020-06-01T05:04:46Z', 'compliant': null, 'reference': 'https://access.redhat.com/errata/RHBA-2020:2347', 'summary': [{'label': 'Health index', 'data': '4', 'severityIndex': 1, 'reference': null}] }\" } } }", "oc create secret generic secret-npmrc --from-file=.npmrc=~/.npmrc", "source: git: uri: https://github.com/sclorg/nodejs-ex.git secrets: - destinationDir: . secret: name: secret-npmrc", "oc new-build openshift/nodejs-010-centos7~https://github.com/sclorg/nodejs-ex.git --build-secret secret-npmrc", "oc set triggers deploy/deployment-example --from-image=example:latest --containers=web", "{ \"default\": [{\"type\": \"reject\"}], \"transports\": { \"docker\": { \"access.redhat.com\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ] }, \"atomic\": { \"172.30.1.1:5000/openshift\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ], \"172.30.1.1:5000/production\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/example.com/pubkey\" } ], \"172.30.1.1:5000\": [{\"type\": \"reject\"}] } } }", "docker: access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore", "oc get event -n default | grep Node", "1h 20h 3 origin-node-1.example.local Node Normal NodeHasDiskPressure", "oc get events -n default -o json | jq '.items[] | select(.involvedObject.kind == \"Node\" and .reason == \"NodeHasDiskPressure\")'", "{ \"apiVersion\": \"v1\", \"count\": 3, \"involvedObject\": { \"kind\": \"Node\", \"name\": \"origin-node-1.example.local\", \"uid\": \"origin-node-1.example.local\" }, \"kind\": \"Event\", \"reason\": \"NodeHasDiskPressure\", }", "oc get events --all-namespaces -o json | jq '[.items[] | select(.involvedObject.kind == \"Pod\" and .reason == \"Pulling\")] | length'", "4" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/security_and_compliance/container-security-1
Chapter 1. Support overview
Chapter 1. Support overview Red Hat offers cluster administrators tools for gathering data for your cluster, monitoring, and troubleshooting. 1.1. Get support Get support : Visit the Red Hat Customer Portal to review knowledge base articles, submit a support case, and review additional product documentation and resources. 1.2. Remote health monitoring issues Remote health monitoring issues : OpenShift Container Platform collects telemetry and configuration data about your cluster and reports it to Red Hat by using the Telemeter Client and the Insights Operator. Red Hat uses this data to understand and resolve issues in connected clusters . Similar to connected clusters, you can Use remote health monitoring in a restricted network . OpenShift Container Platform collects data and monitors health using the following: Telemetry : The Telemetry Client gathers and uploads the metrics values to Red Hat every four minutes and thirty seconds. Red Hat uses this data to: Monitor the clusters. Roll out OpenShift Container Platform upgrades. Improve the upgrade experience. Insights Operator : By default, OpenShift Container Platform installs and enables the Insights Operator, which reports configuration and component failure status every two hours. The Insights Operator helps to: Identify potential cluster issues proactively. Provide a solution and preventive action in Red Hat OpenShift Cluster Manager. You can review telemetry information . If you have enabled remote health reporting, Use Insights to identify issues . You can optionally disable remote health reporting. 1.3. Gather data about your cluster Gather data about your cluster : Red Hat recommends gathering your debugging information when opening a support case. This helps Red Hat Support to perform a root cause analysis. A cluster administrator can use the following to gather data about your cluster: The must-gather tool : Use the must-gather tool to collect information about your cluster and to debug the issues. sosreport : Use the sosreport tool to collect configuration details, system information, and diagnostic data for debugging purposes. Cluster ID : Obtain the unique identifier for your cluster, when providing information to Red Hat Support. Bootstrap node journal logs : Gather bootkube.service journald unit logs and container logs from the bootstrap node to troubleshoot bootstrap-related issues. Cluster node journal logs : Gather journald unit logs and logs within /var/log on individual cluster nodes to troubleshoot node-related issues. A network trace : Provide a network packet trace from a specific OpenShift Container Platform cluster node or a container to Red Hat Support to help troubleshoot network-related issues. 1.4. Troubleshooting issues A cluster administrator can monitor and troubleshoot the following OpenShift Container Platform component issues: Installation issues : OpenShift Container Platform installation proceeds through various stages. You can perform the following: Monitor the installation stages. Determine at which stage installation issues occur. Investigate multiple installation issues. Gather logs from a failed installation. Node issues : A cluster administrator can verify and troubleshoot node-related issues by reviewing the status, resource usage, and configuration of a node. You can query the following: Kubelet's status on a node. Cluster node journal logs. CRI-O issues : A cluster administrator can verify CRI-O container runtime engine status on each cluster node.
If you experience container runtime issues, perform the following: Gather CRI-O journald unit logs. Clean CRI-O storage. Operating system issues : OpenShift Container Platform runs on Red Hat Enterprise Linux CoreOS. If you experience operating system issues, you can investigate kernel crash procedures. Perform the following: Enable kdump. Test the kdump configuration. Analyze a core dump. Network issues : To troubleshoot Open vSwitch issues, a cluster administrator can perform the following: Configure the Open vSwitch log level temporarily. Configure the Open vSwitch log level permanently. Display Open vSwitch logs. Operator issues : A cluster administrator can do the following to resolve Operator issues: Verify Operator subscription status. Check Operator pod health. Gather Operator logs. Pod issues : A cluster administrator can troubleshoot pod-related issues by reviewing the status of a pod and completing the following: Review pod and container logs. Start debug pods with root access. Source-to-image issues : A cluster administrator can observe the S2I stages to determine where in the S2I process a failure occurred. Gather the following to resolve Source-to-Image (S2I) issues: Source-to-Image diagnostic data. Application diagnostic data to investigate application failure. Storage issues : A multi-attach storage error occurs when mounting a volume on a new node is not possible because the failed node cannot unmount the attached volume. A cluster administrator can do the following to resolve multi-attach storage issues: Enable multiple attachments by using RWX volumes. Recover or delete the failed node when using an RWO volume. Monitoring issues : A cluster administrator can follow the procedures on the troubleshooting page for monitoring. If the metrics for your user-defined projects are unavailable or if Prometheus is consuming a lot of disk space, check the following: Investigate why user-defined metrics are unavailable. Determine why Prometheus is consuming a lot of disk space. OpenShift CLI ( oc ) issues : Investigate OpenShift CLI ( oc ) issues by increasing the log level.
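Two of the data-gathering steps referenced above are commonly run as follows; the output directory is a placeholder:

```
# Collect cluster debugging data for a support case
oc adm must-gather --dest-dir=/tmp/must-gather

# Print the unique cluster ID to include in the support case
oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}'
```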
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/support/support-overview
Chapter 5. Important changes to external kernel parameters
Chapter 5. Important changes to external kernel parameters This chapter provides system administrators with a summary of significant changes in the kernel shipped with Red Hat Enterprise Linux 8.6. These changes could include, for example, added or updated proc entries, sysctl , and sysfs default values, boot parameters, kernel configuration options, or any noticeable behavior changes. New kernel parameters fw_devlink.strict = [KNL] Format: <bool> With this parameter you can treat all inferred dependencies as mandatory dependencies. This setting only applies if fw_devlink=on|rpm . no_hash_pointers With this parameter you can force pointers that are printed to the console or buffers to be unhashed. By default, when a pointer is printed using the %p format string, that pointer's value is obscured by hashing. This is a security feature that hides actual kernel addresses from unprivileged users. However, it also makes debugging the kernel more difficult since you cannot compare unequal pointers. If this command-line parameter is specified, then all normal pointers will have their true value printed. Pointers that are printed using the %pK format string can still be hashed. Specify no_hash_pointers only when debugging the kernel and do not use it in production. no_entry_flush = [PPC] With this parameter it is possible to avoid flushing the L1-D cache when entering the kernel. no_uaccess_flush = [PPC] With this parameter it is possible to avoid flushing the L1-D cache after accessing user data. rcutorture.nocbs_nthreads = [KNL] With this parameter you can set the number of Read-copy-update (RCU) callback-offload togglers. The default value is 0 (zero) and it disables toggling. rcutorture.nocbs_toggle = [KNL] With this parameter you can set the delay in milliseconds between successive callback-offload toggling attempts. refscale.verbose_batched = [KNL] With this parameter you can batch the additional printk() statements. You can print everything by specifying zero (the default) or a negative value. Otherwise, print every Nth verbose statement, where N is the value specified. strict_sas_size = [X86] Format: <bool> With this parameter you can enable or disable strict sigaltstack size checks against the required signal frame size which depends on the supported floating-point unit (FPU) features. You can use this parameter to filter out binaries, which have not yet been made aware of the AT_MINSIGSTKSZ auxiliary vector. torture.verbose_sleep_frequency = [KNL] This parameter specifies how many verbose printk() statements should be emitted between each sleep. The default value of 0 (zero) disables the verbose-printk() sleeping. torture.verbose_sleep_duration = [KNL] This parameter specifies the duration of each verbose-printk() sleep in jiffies. tsc_early_khz = [X86] Format: <unsigned int> This parameter enables you to skip the early Time Stamp Counter (TSC) calibration and use the given value instead. The parameter proves useful when the early TSC frequency discovery procedure is not reliable, such as on overclocked systems with CPUID.16h support and partial CPUID.15h support. Updated kernel parameters amd_iommu = [HW,X86-64] You can pass parameters to the AMD IOMMU driver in the system. Possible values are: fullflush - Enable flushing of IO/TLB entries when they are unmapped. Otherwise they are flushed before they are reused, which is a lot faster. off - Do not initialize any AMD IOMMU found in the system. force_isolation - Force device isolation for all devices.
The IOMMU driver is no longer allowed to lift isolation requirements as needed. This option does not override iommu=pt . force_enable - Force enable the IOMMU on platforms known to be buggy with IOMMU enabled. Use this option with care. acpi.debug_level = [HW,ACPI,ACPI_DEBUG] Format: <int> CONFIG_ACPI_DEBUG must be enabled to produce any Advanced Configuration and Power Interface (ACPI) debug output. Bits in debug_layer correspond to a _COMPONENT in an ACPI source file. For example #define _COMPONENT ACPI_EVENTS Bits in debug_level correspond to a level in ACPI_DEBUG_PRINT statements. For example ACPI_DEBUG_PRINT((ACPI_DB_INFO, ... The debug_level mask defaults to "info". See Documentation/acpi/debug.txt for more information about debug layers and levels. Enable processor driver info messages: acpi.debug_layer=0x20000000 Enable AML "Debug" output, for example, stores to the Debug object while interpreting AML: acpi.debug_layer=0xffffffff , acpi.debug_level=0x2 Enable all messages related to ACPI hardware: acpi.debug_layer=0x2 , acpi.debug_level=0xffffffff Some values produce so much output that the system is unusable. The log_buf_len parameter is useful if you need to capture more output. acpi_mask_gpe = [HW,ACPI] Format: <byte> or <bitmap-list> Due to the existence of _Lxx/_Exx , some general purpose events (GPEs) triggered by unsupported hardware or firmware features can result in GPE floodings that cannot be automatically disabled by the GPE dispatcher. You can use this facility to prevent such uncontrolled GPE floodings. cgroup_disable = [KNL] Format: <name of the controller(s) or feature(s) to disable> With this parameter you can disable a particular controller or optional feature. The effects of cgroup_disable = <controller/feature> are: controller/feature is not auto-mounted if you mount all cgroups in a single hierarchy controller/feature is not visible as an individually mountable subsystem if controller/feature is an optional feature then the feature is disabled and corresponding cgroups files are not created Currently, only the memory controller deals with this and cuts the overhead; the others just disable the usage. So only cgroup_disable=memory is actually worthwhile. Specifying "pressure" disables the per-cgroup pressure stall information accounting feature. clearcpuid = BITNUM[,BITNUM... ] [X86] With this parameter you can disable CPUID feature X for the kernel. See arch/x86/include/asm/cpufeatures.h for the valid bit numbers. Linux specific bits are not necessarily stable over kernel options, but the vendor specific ones should be. User programs calling CPUID directly or using the feature without checking anything will still see it. This just prevents it from being used by the kernel or shown in /proc/cpuinfo . Also note the kernel could malfunction if you disable some critical bits. iommu.strict = [ARM64, X86] Format: <"0" | "1"> With this parameter you can configure translation look-aside buffer (TLB) invalidation behavior. Possible values are: 0 - lazy mode, requests that IOMMU TLB invalidation for Direct Memory Access (DMA) unmap operations is deferred 1 - strict mode (default), DMA unmap operations invalidate IOMMU hardware TLBs synchronously. On AMD64 and Intel 64, the default behavior depends on the equivalent driver-specific parameters. However, a strict mode explicitly specified by either method takes precedence. rcutree.use_softirq = [KNL] If this parameter is set to zero, it moves all RCU_SOFTIRQ processing to per-CPU rcuc kthreads. The default is a non-zero value.
It means that RCU_SOFTIRQ is used by default. Specify rcutree.use_softirq = 0 to use rcuc kthreads. But note that CONFIG_PREEMPT_RT=y kernels disable this kernel boot parameter (forcibly setting it to zero). rcupdate.rcu_normal_after_boot = [KNL] This parameter enables the use of only normal grace-period primitives once boot has completed, that is, after the rcu_end_inkernel_boot() call has been invoked. There is no effect on CONFIG_TINY_RCU kernels. Kernels with the CONFIG_PREEMPT_RT=y setting enable this kernel boot parameter and forcibly set it to the value one. That is, converting any post-boot attempt at an expedited Read-copy-update (RCU) grace period to instead use normal non-expedited grace-period processing. spectre_v2 = [X86] With this parameter you can control mitigation of Spectre variant 2 (indirect branch speculation) vulnerability. The default operation protects the kernel from user space attacks. Possible values are: on - unconditionally enable, implies spectre_v2_user=on off - unconditionally disable, implies spectre_v2_user=off auto - the kernel detects whether your CPU model is vulnerable Selecting 'on' will, and 'auto' may, choose a mitigation method at run time according to the CPU, the available microcode, the setting of the CONFIG_RETPOLINE configuration option, and the compiler with which the kernel was built. Selecting 'on' will also enable the mitigation against user space to user space task attacks. Selecting 'off' will disable both the kernel and the user space protections. You can also select specific mitigations manually: retpoline - replace indirect branches retpoline,generic - Retpolines retpoline,lfence - LFENCE; indirect branch retpoline,amd - alias for retpoline,lfence eibrs - enhanced indirect branch restricted speculation (IBRS) eibrs,retpoline - enhanced IBRS + Retpolines eibrs,lfence - enhanced IBRS + LFENCE ibrs - use IBRS to protect kernel ibrs_always - use IBRS to protect both kernel and userland retpoline,ibrs_user - replace indirect branches with retpolines and use IBRS to protect userland Not specifying this option is equivalent to spectre_v2=auto .
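To experiment with any of these parameters on a running RHEL 8 system, they are typically added to the kernel command line with grubby and verified after a reboot; spectre_v2=auto is used here only as an illustrative value:

```
# Add a kernel boot parameter to all installed kernels (illustrative value)
grubby --update-kernel=ALL --args="spectre_v2=auto"

# After a reboot, confirm the parameter is on the kernel command line
cat /proc/cmdline
```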
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.6_release_notes/kernel_parameters_changes
Chapter 1. Integrating an overcloud with Ceph Storage
Chapter 1. Integrating an overcloud with Ceph Storage Red Hat OpenStack Platform director creates a cloud environment called the overcloud. You can use director to configure extra features for an overcloud, such as integration with Red Hat Ceph Storage. You can integrate your overcloud with Ceph Storage clusters created with director or with existing Ceph Storage clusters. For more information about Red Hat Ceph Storage, see the Red Hat Ceph Storage Architecture Guide . 1.1. Red Hat Ceph Storage compatibility RHOSP 16.2 supports connection to external Red Hat Ceph Storage 4 and Red Hat Ceph Storage 5 clusters. 1.2. Deploying the Shared File Systems service with external CephFS You can deploy the Shared File Systems service (manila) with CephFS by using Red Hat OpenStack Platform (RHOSP) director. You can use the Shared File Systems service with the NFS protocol or the native CephFS protocol. Important You cannot use the Shared File Systems service with the CephFS native driver to serve shares to Red Hat OpenShift Container Platform through Manila CSI. Red Hat does not support this type of deployment. For more information, contact Red Hat Support. The Shared File Systems service with CephFS through NFS fully supports serving shares to Red Hat OpenShift Container Platform through Manila CSI. This solution is not intended for large scale deployments. For more information about CSI workload recommendations, see https://access.redhat.com/articles/6667651 . To use native CephFS shared file systems, clients require access to the Ceph public network. When you integrate an overcloud with an existing Ceph Storage cluster, director does not create an isolated storage network to designate as the Ceph public network. This network is assumed to already exist. Do not provide direct access to the Ceph public network; instead, allow tenants to create a router to connect to the Ceph public network. NFS-Ganesha gateway When you use CephFS through the NFS protocol, director deploys the NFS-Ganesha gateway on Controller nodes managed by Pacemaker (PCS). PCS manages cluster availability by using an active-passive configuration. The NFS-Ganesha gateway is supported with Red Hat Ceph Storage 4.x (Ceph package 14.x) and Red Hat Ceph Storage 5.x (Ceph package 16.x). For information about how to determine the Ceph Storage release installed on your system, see Red Hat Ceph Storage releases and corresponding Ceph package versions . You must install the latest version of the ceph-ansible package on the undercloud, as described in Installing the ceph-ansible package . Prerequisites Before you configure the Shared File Systems service with an external Ceph Storage cluster, complete the following prerequisites: Verify that your external Ceph Storage cluster has an active Metadata Server (MDS): The external Ceph Storage cluster must have a CephFS file system that is supported by CephFS data and metadata pools. Verify the pools in the CephFS file system: Note the names of these pools to configure the director parameters, ManilaCephFSDataPoolName and ManilaCephFSMetadataPoolName . For more information about this configuration, see Creating a custom environment file . The external Ceph Storage cluster must have a cephx client name and key for the Shared File Systems service. Verify the keyring: Replace <client name> with your cephx client name. 1.3. 
Configuring Ceph Object Store to use external Ceph Object Gateway Red Hat OpenStack Platform (RHOSP) director supports configuring an external Ceph Object Gateway (RGW) as an Object Store service. To authenticate with the external RGW service, you must configure RGW to verify users and their roles in the Identity service (keystone). For more information about how to configure an external Ceph Object Gateway, see Configuring the Ceph Object Gateway to use Keystone authentication in the Using Keystone with the Ceph Object Gateway Guide .
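For the Shared File Systems service prerequisites described earlier in this chapter, the CephFS pool names are passed to director through a custom environment file; the file path and pool names in this sketch are illustrative:

```
# Write a custom environment file with the CephFS pool parameters (illustrative names)
cat > /home/stack/templates/manila-cephfs-pools.yaml <<'EOF'
parameter_defaults:
  ManilaCephFSDataPoolName: manila_data
  ManilaCephFSMetadataPoolName: manila_metadata
EOF
```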
[ "ceph -s", "ceph fs ls", "ceph auth get client.<client name>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/integrating_an_overcloud_with_an_existing_red_hat_ceph_storage_cluster/assembly-integrating-with-ceph-storage_existing-ceph
Chapter 5. Bug fixes
Chapter 5. Bug fixes This section describes the notable bug fixes introduced in Red Hat OpenShift Data Foundation 4.17. 5.1. Disaster recovery Failover of applications hung in the FailingOver state Previously, applications were not DR protected successfully because of errors in protecting the required resources to the provided S3 stores. So, failing over such applications resulted in the FailingOver state. With this fix, a metric and a related alert are added to the application DR protection health, so that an alert is shown to rectify protection issues after DR protects the applications. As a result, the applications that are successfully protected are failed over. ( BZ#2248723 ) Post hub recovery, applications which were in FailedOver state consistently report FailingOver Previously, after recovering a DR setup from a hub and a ManagedCluster loss to a passive hub, applications which were in FailedOver state to the lost ManagedCluster consistently reported FailingOver status. Failing over such applications to the surviving cluster was allowed, but the checks required on the surviving cluster to ensure that the failover could be initiated were missing. With this fix, the Ramen hub operator verifies that the target cluster is ready for a failover operation before initiating the action. As a result, any failover initiated is successful; if stale resources still exist on the failover target cluster, the operator stalls the failover until the stale resources are cleaned up. ( BZ#2247847 ) Post hub recovery, subscription app pods now come up after Failover Previously, post hub recovery, the subscription application pods did not come up after failover from the primary to the secondary managed clusters. This was caused by an RBAC error in the AppSub subscription resource on the managed cluster, due to a timing issue in the backup and restore scenario. This issue has been fixed, and subscription app pods now come up after failover from primary to secondary managed clusters. ( BZ#2295782 ) Application namespaces are no longer left behind in managed clusters after deleting the application Previously, if an application was deleted on the RHACM hub cluster and its corresponding namespace was deleted on the managed clusters, the namespace reappeared on the managed cluster. With this fix, once the corresponding namespace is deleted, the namespace no longer reappears. ( BZ#2059669 ) odf-client-info config map is now created Previously, the controller inside MCO was not properly filtering the ManagedClusterView resource. This led to a key config map, odf-client-info , not being created. With this update, the filtering mechanism has been fixed, and the odf-client-info config map is created as expected. ( BZ#2308144 ) 5.2. Multicloud Object Gateway Ability to change log level of backingstore pods Previously, there was no way to change the log level of backingstore pods. With this update, changing NOOBAA_LOG_LEVEL in the config map changes the debug level of the pv-pools backingstore pods accordingly. ( BZ#2297448 ) STS token expiration now works as expected Previously, incorrect STS token expiration time calculations and printouts caused STS tokens to remain valid long after their expiration time. Users would see the wrong expiration time when trying to assume a role. With this update, the STS code was revamped to fix these problems, and support was added for the CLI flag --duration-seconds . Now STS token expiration works as expected, and is shown to the user properly.
( BZ#2299801 ) Block deletion of OBC via regular S3 flow S3 buckets can be created both via object bucket claim (OBC) and directly via the S3 operation. When a bucket is created with an OBC and deleted via S3, it leaves the OBC entity dangling and the state is inconsistent. With this update, deleting an OBC via regular S3 flow is blocked, avoiding an inconsistent state. ( BZ#2301657 ) NooBaa Backingstore no longer stuck in Connecting post upgrade Previously, the NooBaa backingstore blocked the upgrade because it remained in the Connecting phase, leaving the storagecluster.yaml in the Progressing phase. This issue has been fixed, and the upgrade progresses as expected. ( BZ#2302507 ) NooBaa DB cleanup no longer fails Previously, the NooBaa DB cleanup would stop after DB_CLEANER_BACK_TIME elapsed from the start time of the noobaa-core pod. This meant NooBaa DB PVC consumption would rise. This issue has been fixed, and NooBaa DB cleanup works as expected. ( BZ#2305978 ) MCG standalone upgrade working as expected Previously, a bug caused NooBaa pods to have incorrect affinity settings, leaving them stuck in the pending state. This fix ensures that any previously incorrect affinity settings on the NooBaa pods are cleared. Affinity is now only applied when the proper conditions are met, preventing the issue from recurring after the upgrade. After upgrading to the fixed version, the pending NooBaa pods won't automatically restart. To finalize the upgrade, manually delete the old pending pods. The new pods will then start with the correct affinity settings, allowing them to run successfully. ( BZ#2314636 ) 5.3. Ceph New restored or cloned CephFS PVC creation no longer slows down due to parallel clone limit Previously, upon reaching the limit of parallel CephFS clones, the rest of the clones would queue up, slowing down the cloning. With this enhancement, upon reaching the limit of parallel clones at one time, the new clone creation requests are rejected. The default parallel clone creation limit is 4. To increase the limit, contact customer support. ( BZ#2190161 ) 5.4. OpenShift Data Foundation console Pods created in openshift-storage by end users no longer cause errors Previously, when a pod was created in openshift-storage by an end user, it would cause the console topology page to break. This was because pods without any ownerReferences were not considered to be part of the design. With this fix, pods without owner references are filtered out, and only pods with correct ownerReferences are shown. This allows the topology page to work correctly even when pods are arbitrarily added to the openshift-storage namespace. ( BZ#2245068 ) Applying an object bucket claim (OBC) no longer causes an error Previously, when attaching an OBC to a deployment using the OpenShift Web Console, the error Address form errors to proceed was shown even when there were no errors in the form. With this fix, the form validations have been changed, and there is no longer an error. ( BZ#2302575 ) Automatic mounting of service account tokens disabled to increase security By default, OpenShift automatically mounts a service account token into every pod, regardless of whether the pod needs to interact with the OpenShift API. This behavior can expose the pod's service account token to unintended use. If a pod is compromised, the attacker could gain access to this token, leading to possible privilege escalation within the cluster.
If the default service account token is unnecessarily mounted, and the pod becomes compromised, the attacker can use the service account credentials to interact with the OpenShift API. This access could lead to serious security breaches, such as unauthorized actions within the cluster, exposure of sensitive information, or privilege escalation across the cluster. To mitigate this vulnerability, the automatic mounting of service account tokens is disabled unless explicitly needed by the application running in the pod. In the case of ODF console pod the fix involved disabling the automatic mounting of the default service account token by setting the automountServiceAccountToken: false in the pod or service account definition. With this fix, pods no longer automatically mount the service account token unless explicitly needed. This reduces the risk of privilege escalation or misuse of the service account in case of a compromised pod. ( BZ#2302857 ) Provider mode clusters no longer have the option to connect to external RHCS cluster Previously, during provider mode deployment there was the option to deploy external RHCS. This resulted in an unsupported deployment. With this fix, connecting to external RHCS is now blocked so users do not end with an unsupported deployment. ( BZ#2312442 ) 5.5. Rook Rook.io Operator no longer gets stuck when removing a mon from quorum Previously, mon quorum could be lost when removing a mon from quorum due to a race condition. This was because there might not have been enough quorum to complete the removal of the mon from quorum. This issue has been fixed, and the Rook.io Operator no longer gets stuck when removing a mon from quorum. ( BZ#2292435 ) Network Fence for non-graceful node shutdown taint no longer blocks volume mount on surviving zone Previously, Rook was creating NetworkFence CR with an incorrect IP address when a node was tainted as out-of-service. Fencing the wrong IP address was blocking the application pods from moving to another node when a taint was added. With this fix, auto NetworkFence has been disabled in Rook when the out-of-service taint is added on the node, and application pods are no longer blocked from moving to another node. ( BZ#2315666 ) 5.6. Ceph monitoring Invalid KMIP configurations now treated as errors Previously, Thales Enterprise Key Management (KMIP) was not added in the recognized KMS services. This meant that whenever an invalid KMIP configuration was provided, it was not treated as an error. With this fix, Thales KMIP service has been added as a valid KMS service. This enables KMS services to propagate KMIP configuration statuses correctly. Therefore, any mis-configurations are treated as errors. ( BZ#2271773 ) 5.7. CSI Driver Pods no longer get stuck during upgrade Previously, if there was a node with an empty label, PVC mount would fail during upgrade. With this fix, nodes labeled with empty value aren't considered for the crush_location mount, so they no longer block PVC mounting. ( BZ#2297265 )
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/4.17_release_notes/bug_fixes
6.3. Known issue with iscsi/mpath/scsi storage volumes
6.3. Known issue with iscsi/mpath/scsi storage volumes It is not possible at the moment with virt-v2v to convert a guest with a storage volume in a pool of any of the following types: iscsi mpath scsi Converting such a guest results in a failed conversion. There is no workaround for this issue.
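Before attempting a conversion, you can check which storage pool types the source host defines so you know whether a guest is affected. A rough sketch; the pool name default is a placeholder:
# List all storage pools defined on the source host
virsh pool-list --all

# Show the type of a specific pool; guests with volumes in iscsi, mpath,
# or scsi pools cannot be converted
virsh pool-dumpxml default | grep '<pool type'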
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/v2v_guide/known-issue-storage-pool
Chapter 4. Setting up Key Archival and Recovery
Chapter 4. Setting up Key Archival and Recovery For more information on Key Archival and Recovery, see the Archiving, Recovering, and Rotating Keys section in the Red Hat Certificate System Planning, Installation, and Deployment Guide. This chapter explains how to set up the Key Recovery Authority (KRA), previously known as Data Recovery Manager (DRM), to archive private keys and to recover archived keys for restoring encrypted data. Note This chapter only discusses archiving keys through client-side key generation. Server-side key generation and archival, whether initiated through TPS or through the CA's End Entity portal, are not discussed here. For information on smart card key recovery, see Section 6.11, "Setting Up Server-side Key Generation". For information on server-side key generation provided at the CA's EE portal, see Section 5.2.2, "Generating CSRs Using Server-Side Key Generation". Note Gemalto SafeNet LunaSA only supports PKI private key extraction in its CKE - Key Export model, and only in non-FIPS mode. The LunaSA Cloning model and the CKE model in FIPS mode do not support PKI private key extraction. When KRA is installed, it joins a security domain and is paired with the CA. At that time, it is configured to archive and recover private encryption keys. However, if the KRA certificates are issued by an external CA rather than one of the CAs within the security domain, then the key archival and recovery process must be set up manually. For more information, see the Manually Setting up Key Archival section in the Red Hat Certificate System Planning, Installation, and Deployment Guide. Note In a cloned environment, it is necessary to set up key archival and recovery manually. For more information, see the Updating CA-KRA Connector Information After Cloning section in the Red Hat Certificate System Planning, Installation, and Deployment Guide. 4.1. Configuring Agent-Approved Key Recovery in the Console Note While the number of key recovery agents can be configured in the Console, the group to use can only be set directly in the CS.cfg file. The Console uses the Key Recovery Authority Agents Group by default. Open the KRA's console. For example: Click the Key Recovery Authority link in the left navigation tree. Enter the number of agents to use to approve key recovery in the Required Number of Agents field. Note For more information on how to configure agent-approved key recovery in the CS.cfg file, see the Configuring Agent-Approved Key Recovery in the Command Line section in the Red Hat Certificate System Planning, Installation, and Deployment Guide.
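The same settings can also be reviewed directly in the KRA's CS.cfg file. The snippet below is only a sketch: the instance path follows the default layout and the parameter names shown are the ones commonly used for agent-approved recovery, so verify both against the Planning, Installation, and Deployment Guide before relying on them.
# Review the number of required recovery agents and the agent group
# (replace <instance_name> with the name of your KRA instance)
grep -E 'noOfRequiredRecoveryAgents|recoveryAgentGroup' \
    /var/lib/pki/<instance_name>/kra/conf/CS.cfg

# Restart the subsystem after any manual CS.cfg change
systemctl restart pki-tomcatd@<instance_name>.service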
[ "pkiconsole https://server.example.com:8443/kra" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/Key_Recovery_Authority
Chapter 4. Configure OpenStack for Federation
Chapter 4. Configure OpenStack for Federation 4.1. Determine the IP Address and FQDN Settings The following nodes require an assigned Fully-Qualified Domain Name (FQDN): The host running the Dashboard (horizon). The host running the Identity Service (keystone), referenced in this guide as USDFED_KEYSTONE_HOST . Note that more than one host will run a service in a high-availability environment, so the IP address is not a host address but rather the IP address bound to the service. The host running RH-SSO. The host running IdM. The Red Hat OpenStack Platform director deployment does not configure DNS or assign FQDNs to the nodes, however, the authentication protocols (and TLS) require the use of FQDNs. As a result, you must determine the external public IP address of the overcloud. Note that you need the IP address of the overcloud, which is not the same as the IP address allocated to an individual node in the overcloud, such as controller-0 , controller-1 . You will need the external public IP address of the overcloud because IP addresses are assigned to a high availability cluster, instead of an individual node. Pacemaker and HAProxy work together to provide the appearance of a single IP address; this IP address is entirely distinct from the individual IP address of any given node in the cluster. As a result, the correct way to think about the IP address of an OpenStack service is not in terms of which node that service is running on, but rather to consider the effective IP address that the cluster is advertising for that service (for example, the VIP). 4.1.1. Retrieve the IP address In order to determine the correct IP address, you will need to assign a name to it, instead of using DNS. There are two ways to do this: Red Hat OpenStack Platform director uses one common public IP address for all OpenStack services, and separates those services on the single public IP address by port number; if you the know public IP address of one service in the OpenStack cluster then you know all of them (however that does not also tell you the port number of a service). You can examine the Keystone URL in the overcloudrc file located in the ~stack home directory on the undercloud. For example: This tells you that the public keystone IP address is 10.0.0.101 and that keystone is available on port 13000 . By extension, all other OpenStack services are also available on the 10.0.0.101 IP address with their own unique port number. However, the more accurate way of determining the IP addresses and port number information is to examine the HAProxy configuration file ( /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg ), which is located on each of the overcloud nodes. The haproxy.cfg file is an identical copy on each of the overcloud controller nodes; this is essential because Pacemaker will assign one controller node the responsibility of running HAProxy for the cluster, in the event of an HAProxy failure Pacemaker will reassign a different overcloud controller to run HAProxy. No matter which controller node is currently running HAProxy, it must act identically; therefore the haproxy.cfg files must be identical. To examine the haproxy.cfg file, SSH into one of the cluster's controller nodes and review /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg . As noted above it does not matter which controller node you select. The haproxy.cfg file is divided into sections, with each beginning with a listen statement followed by the name of the service. 
Immediately inside the service section is a bind statement; these are the front-end IP addresses, some of which are public, and others are internal to the cluster. The server lines are the back-end IP addresses where the service is actually running; there should be one server line for each controller node in the cluster. To determine the public IP address and port of the service from the multiple bind entries in the section: Red Hat OpenStack Platform director puts the public IP address as the first bind entry. In addition, the public IP address should support TLS, so the bind entry will have the ssl keyword. The IP address should also match the IP address set in the OS_AUTH_URL located in the overcloudrc file. For example, here is a sample keystone_public section from a haproxy.cfg : The first bind line has the ssl keyword, and the IP address matches that of the OS_AUTH_URL located in the overcloudrc file. As a result, you can be confident that keystone is publicly accessed at the IP address of 10.0.0.101 on port 13000 . The second bind line is internal to the cluster, and is used by other OpenStack services running in the cluster (note that it does not use TLS because it is not public). The mode http setting indicates that the protocol in use is HTTP ; this allows HAProxy to examine HTTP headers, among other tasks. The X-Forwarded-Proto lines: These settings require particular attention and will be covered in more detail in Section 4.1.2, "Set the Host Variables and Name the Host" . They guarantee that the HTTP header X-Forwarded-Proto will be set and seen by the back-end server. The back-end server in many cases needs to know if the client was using HTTPS . However, HAProxy terminates TLS, so the back-end server will see the connection as non-TLS. The X-Forwarded-Proto HTTP header is a mechanism that allows the back-end server to identify which protocol the client was actually using, instead of which protocol the request arrived on. It is essential that a client not be able to send an X-Forwarded-Proto HTTP header, because that would allow the client to maliciously spoof that the protocol was HTTPS . The X-Forwarded-Proto HTTP header can either be deleted by the proxy when it is received from the client, or the proxy can forcefully set it and so mitigate any malicious use by the client. This is why X-Forwarded-Proto will always be set to one of https or http . The X-Forwarded-For HTTP header is used to track the client, which allows the back-end server to identify who the requesting client was instead of it appearing to be the proxy. This option causes the X-Forwarded-For HTTP header to be inserted into the request: See Section 4.1.2, "Set the Host Variables and Name the Host" for more information on forwarded proto , redirects, ServerName , among others. The following line ensures that only HTTPS is used on the public IP address: This setting checks whether the request was received on the public IP address (for example, 10.0.0.101 ) and was not HTTPS; if so, it performs a 301 redirect and sets the scheme to HTTPS. HTTP servers (such as Apache) often generate self-referential URLs for redirect purposes. This redirect location must indicate the correct protocol, but if the server is behind a TLS terminator it will think its redirection URL should be HTTP and not HTTPS. If a Location header that uses the HTTP scheme appears in the response, this line rewrites it to use the HTTPS scheme: 4.1.2.
Set the Host Variables and Name the Host You will need to determine the IP address and port to use. In this example the IP address is 10.0.0.101 and the port is 13000 . This value can be confirmed in overcloudrc : And in the keystone_public section of the haproxy.cfg file: You must also give the IP address a FQDN. This example uses overcloud.localdomain . Note that the IP address should be put in the /etc/hosts file since DNS is not being used: Note Red Hat OpenStack Platform director is expected to have already configured the hosts files on the overcloud nodes, but you may need to add the host entry on any external hosts that participate. The USDFED_KEYSTONE_HOST and USDFED_KEYSTONE_HTTPS_PORT must be set in the fed_variables file. Using the above example values: Because Mellon is running on the Apache server that hosts keystone, the Mellon host:port and keystone host:port values will match. Note If you run hostname on one of the controller nodes it will likely be similar to this: controller-0.localdomain , but note that this is its internal cluster name, not its public name. You will instead need to use the public IP address . 4.2. Install Helper Files on undercloud-0 Copy the configure-federation and fed_variables files into the ~stack home directory on undercloud-0 . You will have created these files as part of Section 1.5.3, "Using the Configuration Script" . 4.3. Set your Deployment Variables The file fed_variables contains variables specific to your federation deployment. These variables are referenced in this guide as well as in the configure-federation helper script. Each site-specific federation variable is prefixed with FED_ and (when used as a variable) will use the USD variable syntax, such as USDFED_ . Make sure every FED_ variable in fed_variables is provided a value. 4.4. Copy the Helper Files From undercloud-0 to controller-0 Copy the configure-federation and the edited fed_variables from the ~stack home directory on undercloud-0 to the ~heat-admin home directory on controller-0 . For example: Note You can use the configure-federation script to perform the above step: USD ./configure-federation copy-helper-to-controller 4.5. Initialize the Working Environment on the undercloud On the undercloud node, as the stack user, create the fed_deployment directory. This location will be the file stash. For example: Note You can use the configure-federation script to perform the above step: USD ./configure-federation initialize 4.6. Initialize the Working Environment on controller-0 From the undercloud node, SSH into the controller-0 node as the heat-admin user and create the fed_deployment directory. This location will be the file stash. For example: Note You can use the configure-federation script to perform the above step. From the controller-0 node: USD ./configure-federation initialize 4.7. Install mod_auth_mellon on Each Controller Node From the undercloud node, SSH into the controller-n node as the heat-admin user and install the mod_auth_mellon . For example: Note If mod_auth_mellon is already installed on the controller nodes, you may need to reinstall it again. See the Reinstall mod_auth_mellon note for more details. Note You can use the configure-federation script to perform the above step: USD ./configure-federation install-mod-auth-mellon 4.8. Use the Keystone Version 3 API Before you can use the openstack command line client to administer the overcloud, you will need to configure certain parameters. 
Normally this is done by sourcing an rc file within your shell session, which sets the required environment variables. Red Hat OpenStack Platform director will have created an overcloudrc file for this purpose in the home directory of the stack user, in the undercloud-0 node. By default, the overcloudrc file is set to use the v2 version of the keystone API, however, federation requires the use of the v3 keystone API. As a result, you need to create a new rc file that uses the v3 keystone API. For example: Write the following contents to overcloudrc.v3 : Note You can use the configure-federation script to perform the above step: USD ./configure-federation create-v3-rcfile From this point forward, to work with the overcloud you will use the overcloudrc.v3 file: 4.9. Add the RH-SSO FQDN to Each Controller The mellon service will be running on each controller node and configured to connect to the RH-SSO IdP. If the FQDN of the RH-SSO IdP is not resolvable through DNS then you will have to manually add the FQDN to the /etc/hosts file on all controller nodes (after the Heat Hosts section): 4.10. Install and Configure Mellon on the Controller Node The keycloak-httpd-client-install tool performs many of the steps needed to configure mod_auth_mellon and have it authenticate against the RH-SSO IdP. The keycloak-httpd-client-install tool should be run on the node where mellon will run. In our case this means mellon will be running on the overcloud controllers protecting Keystone. Note that this is a high availability deployment, and as such there will be multiple overcloud controller nodes, each running identical copies. As a result, the mellon setup will need to be replicated across each controller node. You will approach this by installing and configuring mellon on controller-0 , and then gathering up all the configuration files that the keycloak-httpd-client-install tool created into an archive (for example, a tar file) and then let swift copy the archive over to each controller and unarchive the files there. Run the RH-SSO client installation: Note You can use configure-federation script to perform the above step: USD ./configure-federation client-install After the client RPM installation, you should see output similar to this: 4.11. Edit the Mellon Configuration Additional mellon configuration is required for your deployment: As you will be using a list of groups during the IdP-assertion-to-Keystone mapping phase, the keystone mapping engine expects lists to be in a certain format (one value with items separated by a semicolon (;)). As a result, you must configure mellon so that when it receives multiple values for an attribute, it must know to combine the multiple attributes into a single value with items separated by a semicolon. This mellon directive will address that: To configure this setting in your deployment: Locate the <Location /v3> block and add a line to it. For example: 4.12. Create an Archive of the Generated Configuration Files The mellon configuration needs to be replicated across all controller nodes, so you will create an archive of the files that allows you to install the exact same file contents on each controller node. The archive will be stored in the ~heat-admin/fed_deployment subdirectory. Create the compressed tar archive: Note You can use the configure-federation script to perform the above step: USD ./configure-federation create-sp-archive 4.13. 
Retrieve the Mellon Configuration Archive On the undercloud-0 node, fetch the archive you just created and extract the files, as you will need access some of the data in subsequent steps (for example the entityID of the RH-SSO IdP). Note You can use the configure-federation script to perform the above step: USD ./configure-federation fetch-sp-archive 4.14. Prevent Puppet From Deleting Unmanaged HTTPD Files By default, the Puppet Apache module will purge any files in the Apache configuration directories it is not managing. This is considered a reasonable precaution, as it prevents Apache from operating in any manner other than the configuration enforced by Puppet. However, this conflicts with the manual configuration of mellon in the HTTPD configuration directories. When the Apache Puppet apache::purge_configs flag is enabled (which it is by default), Puppet will delete files belonging to the mod_auth_mellon RPM when the mod_auth_mellon RPM is installed. It will also delete the configuration files generated by keycloak-httpd-client-install when it is run. Until the mellon files are under Puppet control, you will have to disable the apache::purge_configs flag. You may also want to check if the mod_auth_mellon configuration files have already been removed in a run of overcloud_deploy, see Reinstall mod_auth_mellon for more information. Note Disabling the apache::purge_configs flag opens the controller nodes to vulnerabilities. Do not forget to re-enable it when Puppet adds support for managing mellon. To override the apache::purge_configs flag, create a Puppet file containing the override and add the override file to the list of Puppet files used when overcloud_deploy.sh is run. Create the file fed_deployment/puppet_override_apache.yaml and add this content: Add the file near the end of the overcloud_deploy.sh script. It should be the last -e argument. For example: Note You can use the configure-federation script to perform the above step: USD ./configure-federation puppet-override-apache 4.15. Configure Keystone for Federation This guide uses keystone domains, which require some extra configuration. If enabled, the keystone Puppet module can perform this extra configuration step. In one of the Puppet YAML files, add the following: Some additional values must be set in /etc/keystone/keystone.conf to enable federation: auth:methods federation:trusted_dashboard federation:sso_callback_template federation:remote_id_attribute An explanation of these configuration settings and their suggested values: auth:methods - A list of allowed authentication methods. By default the list is: ['external', 'password', 'token', 'oauth1'] . You will need to enable SAML using the mapped method, so this value should be: external,password,token,oauth1,mapped . federation:trusted_dashboard - A list of trusted dashboard hosts. Before accepting a Single Sign-On request to return a token, the origin host must be a member of this list. This configuration option may be repeated for multiple values. You must set this in order to use web-based SSO flows. For this deployment the value would be: https://USDFED_KEYSTONE_HOST/dashboard/auth/websso/ Note that the host is USDFED_KEYSTONE_HOST only because Red Hat OpenStack Platform director co-locates both keystone and horizon on the same host. If horizon is running on a different host to keystone, then you will need to adjust accordingly. federation:sso_callback_template - The absolute path to a HTML file used as a Single Sign-On callback handler. 
This page is expected to redirect the user from keystone back to a trusted dashboard host, by form encoding a token in a POST request. Keystone's default value should be sufficient for most deployments: /etc/keystone/sso_callback_template.html federation:remote_id_attribute - The value used to obtain the entity ID of the Identity Provider. For mod_auth_mellon you will use MELLON_IDP . Note that this is set in the mellon configuration file using the MellonIdP IDP directive. Create the fed_deployment/puppet_override_keystone.yaml file with this content: Towards the end of the overcloud_deploy.sh script, add the file you just created. It should be the last -e argument. For example: Note You can use the configure-federation script to perform the above step: USD ./configure-federation puppet-override-keystone 4.16. Deploy the Mellon Configuration Archive You will use swift artifacts to install the mellon configuration files on each controller node. For example: Note You can use the configure-federation script to perform the above step: USD ./configure-federation deploy-mellon-configuration 4.17. Redeploy the Overcloud In earlier steps you made changes to the Puppet YAML configuration files and swift artifacts. These changes can now be applied using this command: Note In later steps, other configuration changes will be made to the overcloud controller nodes. Re-running Puppet using the overcloud_deploy.sh script may overwrite some of these changes. You should avoid applying the Puppet configuration from this point forward to avoid losing any manual edits that were made to the configuration files on the overcloud controller nodes. 4.18. Use Proxy Persistence for Keystone on Each Controller With high availability, any one of the multiple back-end servers can be expected to field a request. Because of the number of redirections used by SAML, and the fact each of those redirections involves state information, it is vital that the same server processes all the transactions. In addition, a session will be established by mod_auth_mellon . Currently mod_auth_mellon is not capable of sharing its state information across multiple servers, so you must configure HAProxy to always direct requests from a client to the same server each time. HAProxy can bind a client to the same server using either affinity or persistence. This article on HAProxy Sticky Sessions provides valuable background material. The difference between persistence and affinity is that affinity is used when information from a layer below the application layer is used to pin a client request to a single server. Persistence is used when the application layer information binds a client to a single server sticky session. The main advantage of persistence over affinity is that it is much more accurate. Persistence is implemented through the use of cookies. The HAProxy cookie directive names the cookie that will be used for persistence, along with parameters controlling its use. The HAProxy server directive has a cookie option that sets the value of the cookie, which should be set to the name of the server. If an incoming request does not have a cookie identifying the back-end server, then HAProxy selects a server based on its configured balancing algorithm. HAProxy ensures that the cookie is set to the name of the selected server in the response. If the incoming request has a cookie identifying a back-end server then HAProxy automatically selects that server to handle the request. 
To enable persistence in the keystone_public block of the /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg configuration file, add this line: This setting states that SERVERID will be the name of the persistence cookie. Next, you must edit each server line and add cookie <server-name> as an additional option. For example: Note that the other parts of the server directive have been omitted for clarity. 4.19. Create Federated Resources You might recall from the introduction that you are going to follow the federation example in the Create keystone groups and assign roles section of the keystone federation documentation. Perform the following steps on the undercloud node as the stack user (after sourcing the overcloudrc.v3 file): Note You can use the configure-federation script to perform the above step: USD ./configure-federation create-federated-resources 4.20. Create the Identity Provider in OpenStack The IdP needs to be registered in keystone, which creates a binding between the entityID in the SAML assertion and the name of the IdP in keystone. You will need to locate the entityID of the RH-SSO IdP. This value is located in the IdP metadata which was obtained when keycloak-httpd-client-install was run. The IdP metadata is stored in the /var/lib/config-data/puppet-generated/keystone/etc/httpd/saml2/v3_keycloak_USDFED_RHSSO_REALM_idp_metadata.xml file. In an earlier step you retrieved the mellon configuration archive and extracted it to the fed_deployment work area. As a result, you can find the IdP metadata in fed_deployment/var/lib/config-data/puppet-generated/keystone/etc/httpd/saml2/v3_keycloak_USDFED_RHSSO_REALM_idp_metadata.xml . In the IdP metadata file, you will find an <EntityDescriptor> element with an entityID attribute. You need the value of the entityID attribute, and for example purposes this guide will assume it has been stored in the USDFED_IDP_ENTITY_ID variable. You can name your IdP rhsso , which is assigned to the variable USDFED_OPENSTACK_IDP_NAME . For example: Note You can use the configure-federation script to perform the above step: USD ./configure-federation openstack-create-idp 4.21. Create the Mapping File and Upload to Keystone Keystone performs a mapping to match the IdP's SAML assertion into a format that keystone can understand. The mapping is performed by keystone's mapping engine and is based on a set of mapping rules that are bound to the IdP. These are the mapping rules used in this example (as described in the introduction): This mapping file contains only one rule. Rules are divided into two parts: local and remote . The mapping engine works by iterating over the list of rules until one matches, and then executing it. A rule is considered a match only if all the conditions in the remote part of the rule match. In this example the remote conditions specify: The assertion must contain a value called MELLON_NAME_ID . The assertion must contain a value called MELLON_groups and at least one of the groups in the group list must be openstack-users . If the rule matches, then: The keystone user name will be assigned the value from MELLON_NAME_ID . The user will be assigned to the keystone group federated_users in the Default domain. In summary, if the IdP successfully authenticates the user, and the IdP asserts that the user belongs to the group openstack-users , then keystone will allow that user to access OpenStack with the privileges bound to the federated_users group in keystone. 4.21.1.
Create the mapping To create the mapping in keystone, create a file containing the mapping rules and then upload it into keystone, giving it a reference name. Create the mapping file in the fed_deployment directory (for example, in fed_deployment/mapping_USD{FED_OPENSTACK_IDP_NAME}_saml2.json ), and assign the name USDFED_OPENSTACK_MAPPING_NAME to the mapping rules. For example: Note You can use the configure-federation script to perform the above procedure as two steps: create-mapping - creates the mapping file. openstack-create-mapping - performs the upload of the file. 4.22. Create a Keystone Federation Protocol Keystone uses the Mapped protocol to bind an IdP to a mapping. To establish this binding: Note You can use the configure-federation script to perform the above step: USD ./configure-federation openstack-create-protocol 4.23. Fully-Qualify the Keystone Settings On each controller node, edit /var/lib/config-data/puppet-generated/keystone/etc/httpd/conf.d/10-keystone_wsgi_main.conf to confirm that the ServerName directive inside the VirtualHost block includes the HTTPS scheme, the public hostname, and the public port. You must also enable the UseCanonicalName directive. For example: Note Be sure to substitute the USDFED_ variables with the values specific to your deployment. 4.24. Configure Horizon to Use Federation On each controller node, edit /var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard/local_settings and make sure the following configuration values are set: Note Be sure to substitute the USDFED_ variables with the values specific to your deployment. 4.25. Configure Horizon to Use the X-Forwarded-Proto HTTP Header On each controller node, edit /var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard/local_settings and uncomment the line: Note You must restart a container for configuration changes to take effect.
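Once the overcloud has been redeployed and the identity provider, mapping, and protocol have been created, a short check from the undercloud confirms that everything is registered. This is a sketch that reuses the variable names from this chapter and assumes the federated resources were created exactly as described above.
# Run as the stack user on the undercloud, with the v3 credentials loaded
source ~/overcloudrc.v3

# The identity provider, mapping, and federation protocol created earlier
openstack identity provider show "$FED_OPENSTACK_IDP_NAME"
openstack mapping show "$FED_OPENSTACK_MAPPING_NAME"
openstack federation protocol list --identity-provider "$FED_OPENSTACK_IDP_NAME"

# The federated group, project, and domain created in the federated resources step
openstack group show federated_users --domain federated_domain
openstack project show federated_project --domain federated_domain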
[ "export OS_AUTH_URL=https://10.0.0.101:13000/v2.0", "listen keystone_public bind 10.0.0.101:13000 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem bind 172.17.1.19:5000 transparent mode http http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto http if !{ ssl_fc } option forwardfor redirect scheme https code 301 if { hdr(host) -i 10.0.0.101 } !{ ssl_fc } rsprep ^Location:\\ http://(.*) Location:\\ https://\\1 server controller-0.internalapi.localdomain 172.17.1.13:5000 check fall 5 inter 2000 rise 2 cookie controller-0.internalapi.localdomain server controller-1.internalapi.localdomain 172.17.1.22:5000 check fall 5 inter 2000 rise 2 cookie controller-1.internalapi.localdomain", "http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto http if !{ ssl_fc }", "option forwardfor", "redirect scheme https code 301 if { hdr(host) -i 10.0.0.101 } !{ ssl_fc }", "rsprep ^Location:\\ http://(.*) Location:\\ https://\\1", "export OS_AUTH_URL=https://10.0.0.101:13000/v2.0", "bind 10.0.0.101:13000 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem", "10.0.0.101 overcloud.localdomain # FQDN of the external VIP", "FED_KEYSTONE_HOST=\"overcloud.localdomain\" FED_KEYSTONE_HTTPS_PORT=13000", "scp configure-federation fed_variables heat-admin@controller-0:/home/heat-admin", "su - stack mkdir fed_deployment", "ssh heat-admin@controller-0 mkdir fed_deployment", "ssh heat-admin@controller-n # replace n with controller number sudo dnf reinstall mod_auth_mellon", "source overcloudrc NEW_OS_AUTH_URL=`echo USDOS_AUTH_URL | sed 's!v2.0!v3!'`", "for key in \\USD( set | sed 's!=.*!!g' | grep -E '^OS_') ; do unset USDkey ; done export OS_AUTH_URL=USDNEW_OS_AUTH_URL export OS_USERNAME=USDOS_USERNAME export OS_PASSWORD=USDOS_PASSWORD export OS_USER_DOMAIN_NAME=Default export OS_PROJECT_DOMAIN_NAME=Default export OS_PROJECT_NAME=USDOS_TENANT_NAME export OS_IDENTITY_API_VERSION=3", "ssh undercloud-0 su - stack source overcloudrc.v3", "ssh heat-admin@controller-n sudo vi /etc/hosts Add this line (substituting the variables) before this line: HEAT_HOSTS_START - Do not edit manually within this section! 
HEAT_HOSTS_END USDFED_RHSSO_IP_ADDR USDFED_RHSSO_FQDN", "ssh heat-admin@controller-0 USD dnf -y install keycloak-httpd-client-install USD sudo keycloak-httpd-client-install --client-originate-method registration --mellon-https-port USDFED_KEYSTONE_HTTPS_PORT --mellon-hostname USDFED_KEYSTONE_HOST --mellon-root /v3 --keycloak-server-url USDFED_RHSSO_URL --keycloak-admin-password USDFED_RHSSO_ADMIN_PASSWORD --app-name v3 --keycloak-realm USDFED_RHSSO_REALM -l \"/v3/auth/OS-FEDERATION/websso/mapped\" -l \"/v3/auth/OS-FEDERATION/identity_providers/rhsso/protocols/mapped/websso\" -l \"/v3/OS-FEDERATION/identity_providers/rhsso/protocols/mapped/auth\"", "[Step 1] Connect to Keycloak Server [Step 2] Create Directories [Step 3] Set up template environment [Step 4] Set up Service Provider X509 Certificiates [Step 5] Build Mellon httpd config file [Step 6] Build Mellon SP metadata file [Step 7] Query realms from Keycloak server [Step 8] Create realm on Keycloak server [Step 9] Query realm clients from Keycloak server [Step 10] Get new initial access token [Step 11] Creating new client using registration service [Step 12] Enable saml.force.post.binding [Step 13] Add group attribute mapper to client [Step 14] Add Redirect URIs to client [Step 15] Retrieve IdP metadata from Keycloak server [Step 16] Completed Successfully", "MellonMergeEnvVars On \";\"", "vi /var/lib/config-data/puppet-generated/keystone/etc/httpd/conf.d/v3_mellon_keycloak_openstack.conf", "<Location /v3> MellonMergeEnvVars On \";\" </Location>", "mkdir fed_deployment tar -cvzf rhsso_config.tar.gz --exclude '*.orig' --exclude '*~' /var/lib/config-data/puppet-generated/keystone/etc/httpd/saml2 /var/lib/config-data/puppet-generated/keystone/etc/httpd/conf.d/v3_mellon_keycloak_openstack.conf", "scp heat-admin@controller-0:/home/heat-admin/fed_deployment/rhsso_config.tar.gz ~/fed_deployment tar -C fed_deployment -xvf fed_deployment/rhsso_config.tar.gz", "parameter_defaults: ControllerExtraConfig: apache::purge_configs: false", "-e /home/stack/fed_deployment/puppet_override_apache.yaml --log-file overcloud_deployment_14.log &> overcloud_install.log", "keystone::using_domain_config: true", "parameter_defaults: controllerExtraConfig: keystone::using_domain_config: true keystone::config::keystone_config: identity/domain_configurations_from_database: value: true auth/methods: value: external,password,token,oauth1,mapped federation/trusted_dashboard: value: https://USDFED_KEYSTONE_HOST/dashboard/auth/websso/ federation/sso_callback_template: value: /etc/keystone/sso_callback_template.html federation/remote_id_attribute: value: MELLON_IDP", "-e /home/stack/fed_deployment/puppet_override_keystone.yaml --log-file overcloud_deployment_14.log &> overcloud_install.log", "source ~/stackrc upload-swift-artifacts -f fed_deployment/rhsso_config.tar.gz", "./overcloud_deploy.sh", "cookie SERVERID insert indirect nocache", "server controller-0 cookie controller-0 server controller-1 cookie controller-1", "openstack domain create federated_domain openstack project create --domain federated_domain federated_project openstack group create federated_users --domain federated_domain openstack role add --group federated_users --group-domain federated_domain --domain federated_domain _member_ openstack role add --group federated_users --group-domain federated_domain --project federated_project _member_", "openstack identity provider create --remote-id USDFED_IDP_ENTITY_ID USDFED_OPENSTACK_IDP_NAME", "[ { \"local\": [ { \"user\": { \"name\": \"{0}\" }, \"group\": { 
\"domain\": { \"name\": \"federated_domain\" }, \"name\": \"federated_users\" } } ], \"remote\": [ { \"type\": \"MELLON_NAME_ID\" }, { \"type\": \"MELLON_groups\", \"any_one_of\": [\"openstack-users\"] } ] } ]", "openstack mapping create --rules fed_deployment/mapping_rhsso_saml2.json USDFED_OPENSTACK_MAPPING_NAME", "./configure-federation create-mapping ./configure-federation openstack-create-mapping", "openstack federation protocol create --identity-provider USDFED_OPENSTACK_IDP_NAME --mapping USDFED_OPENSTACK_MAPPING_NAME mapped\"", "<VirtualHost> ServerName https:USDFED_KEYSTONE_HOST:USDFED_KEYSTONE_HTTPS_PORT UseCanonicalName On </VirtualHost>", "OPENSTACK_KEYSTONE_URL = \"https://USDFED_KEYSTONE_HOST:USDFED_KEYSTONE_HTTPS_PORT/v3\" OPENSTACK_KEYSTONE_DEFAULT_ROLE = \"_member_\" WEBSSO_ENABLED = True WEBSSO_INITIAL_CHOICE = \"mapped\" WEBSSO_CHOICES = ( (\"mapped\", _(\"RH-SSO\")), (\"credentials\", _(\"Keystone Credentials\")), )", "#SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/federate_with_identity_service/steps
3.7. Software Collection MANPATH Support
3.7. Software Collection MANPATH Support To allow the man command on the system to display man pages from the enabled Software Collection, update the MANPATH environment variable with the paths to the man pages that are associated with the Software Collection. To update the MANPATH environment variable, add the following to the %install section of the Software Collection spec file: %install cat >> %{buildroot}%{_scl_scripts}/enable << EOF export MANPATH="%{_mandir}:\USD{MANPATH:-}" EOF This configures the enable scriptlet to update the MANPATH environment variable. The man pages associated with the Software Collection are then not visible as long as the Software Collection is not enabled. The Software Collection can provide a wrapper script that is visible to the system to enable the Software Collection, for example in the /usr/bin/ directory. In this case, ensure that the man pages are visible to the system even if the Software Collection is disabled. To allow the man command on the system to display man pages from the disabled Software Collection, update the MANPATH environment variable with the paths to the man pages associated with the Software Collection. Procedure 3.7. Updating the MANPATH environment variable for the disabled Software Collection To update the MANPATH environment variable, create a custom script /etc/profile.d/ name.sh . The script is preloaded when a shell is started on the system. For example, create the following file: Use the manpage.sh short script that modifies the MANPATH variable to refer to your man path directory: Add the file to your Software Collection package's spec file: SOURCE2: %{?scl_prefix}manpage.sh Install this file into the system /etc/profile.d/ directory by adjusting the %install section of the Software Collection package's spec file: %install install -p -c -m 644 %{SOURCE2} USDRPM_BUILD_ROOT%{?scl:%_root_sysconfdir}%{!?scl:%_sysconfdir}/profile.d/
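To confirm that man pages resolve the way you expect once the package is installed, compare the MANPATH inside and outside the collection. A small sketch; the collection name rh-example and the man page topic example are placeholders:
# MANPATH in a regular shell (picks up the /etc/profile.d/*.sh script shown above)
echo "$MANPATH"

# MANPATH and man page location with the Software Collection enabled
scl enable rh-example 'echo $MANPATH; man -w example'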
[ "%install cat >> %{buildroot}%{_scl_scripts}/enable << EOF export MANPATH=\"%{_mandir}:\\USD{MANPATH:-}\" EOF", "%{?scl_prefix}manpage.sh", "export MANPATH=\"/opt/ provider / software_collection/path/to/your/man_pages :USD{MANPATH}\"", "SOURCE2: %{?scl_prefix}manpage.sh", "%install install -p -c -m 644 %{SOURCE2} USDRPM_BUILD_ROOT%{?scl:%_root_sysconfdir}%{!?scl:%_sysconfdir}/profile.d/" ]
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/packaging_guide/sect-software_collection_manpath_support
Chapter 5. Integrating by using generic webhooks
Chapter 5. Integrating by using generic webhooks With Red Hat Advanced Cluster Security for Kubernetes, you can send alert notifications as JSON messages to any webhook receiver. When a violation occurs, Red Hat Advanced Cluster Security for Kubernetes makes an HTTP POST request on the configured URL. The POST request body includes JSON-formatted information about the alert. The webhook POST request's JSON data includes a v1.Alert object and any custom fields that you configure, as shown in the following example: { "alert": { "id": "<id>", "time": "<timestamp>", "policy": { "name": "<name>", ... }, ... }, "<custom_field_1>": "<custom_value_1>" } You can create multiple webhooks. For example, you can create one webhook for receiving all audit logs and another webhook for alert notifications. To forward alerts from Red Hat Advanced Cluster Security for Kubernetes to any webhook receiver: Set up a webhook URL to receive alerts. Use the webhook URL to set up notifications in Red Hat Advanced Cluster Security for Kubernetes. Identify the policies you want to send notifications for, and update the notification settings for those policies. 5.1. Configuring integrations by using webhooks Create a new integration in Red Hat Advanced Cluster Security for Kubernetes by using the webhook URL. Procedure In the RHACS portal, go to Platform Configuration Integrations . Scroll down to the Notifier Integrations section and select Generic Webhook . Click New integration . Enter a name for Integration name . Enter the webhook URL in the Endpoint field. If your webhook receiver uses an untrusted certificate, enter a CA certificate in the CA certificate field. Otherwise, leave it blank. Note The server certificate used by the webhook receiver must be valid for the endpoint DNS name. You can click Skip TLS verification to ignore this validation. Red Hat does not suggest turning off TLS verification. Without TLS verification, data could be intercepted by an unintended recipient. Optional: Click Enable audit logging to receive alerts about all the changes made in Red Hat Advanced Cluster Security for Kubernetes. Note Red Hat suggests using separate webhooks for alerts and audit logs to handle these messages differently. To authenticate with the webhook receiver, enter details for one of the following: Username and Password for basic HTTP authentication Custom Header , for example: Authorization: Bearer <access_token> Use Extra fields to include additional key-value pairs in the JSON object that Red Hat Advanced Cluster Security for Kubernetes sends. For example, if your webhook receiver accepts objects from multiple sources, you can add "source": "rhacs" as an extra field and filter on this value to identify all alerts from Red Hat Advanced Cluster Security for Kubernetes. Select Test to send a test message to verify that the integration with your generic webhook is working. Select Save to create the configuration. 5.2. Configuring policy notifications Enable alert notifications for system policies. Procedure In the RHACS portal, go to Platform Configuration Policy Management . Select one or more policies for which you want to send alerts. Under Bulk actions , select Enable notification . In the Enable notification window, select the webhook notifier. Note If you have not configured any other integrations, the system displays a message that no notifiers are configured. Click Enable . Note Red Hat Advanced Cluster Security for Kubernetes sends notifications on an opt-in basis. 
To receive notifications, you must first assign a notifier to the policy. Notifications are only sent once for a given alert. If you have assigned a notifier to a policy, you will not receive a notification unless a violation generates a new alert. Red Hat Advanced Cluster Security for Kubernetes creates a new alert for the following scenarios: A policy violation occurs for the first time in a deployment. A runtime-phase policy violation occurs in a deployment after you resolved the runtime alert for a policy in that deployment.
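Apart from the built-in Test button, you can exercise a webhook receiver directly to confirm that it accepts the payload shape shown at the start of this chapter. A minimal sketch; the URL, the token, and the source extra field are placeholders for your own setup:
# Simulate the JSON POST that an alert notification produces
curl -sS -X POST "https://webhook.example.com/rhacs-alerts" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer <access_token>" \
    -d '{
          "alert": {
            "id": "test-alert-id",
            "time": "2024-01-01T00:00:00Z",
            "policy": { "name": "Test Policy" }
          },
          "source": "rhacs"
        }'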
[ "{ \"alert\": { \"id\": \"<id>\", \"time\": \"<timestamp>\", \"policy\": { \"name\": \"<name>\", }, }, \"<custom_field_1>\": \"<custom_value_1>\" }" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/integrating/integrate-using-generic-webhooks
4.5. RHEA-2012:0829 - new packages: ipset and libmnl
4.5. RHEA-2012:0829 - new packages: ipset and libmnl New ipset and libmnl packages are now available for Red Hat Enterprise Linux 6. The ipset packages provide IP sets, a framework inside the Linux 2.4.x and 2.6.x kernel, which can be administered by the ipset utility. Depending on the type, an IP set can currently store IP addresses, TCP/UDP port numbers or IP addresses with MAC addresses in a way that ensures high speed when matching an entry against a set. The libmnl packages required by the ipset packages provide a minimalistic user-space library oriented to Netlink developers. The library provides functions to make socket handling, message building, validating, parsing, and sequence tracking easier. This enhancement update adds the ipset and libmnl packages to Red Hat Enterprise Linux 6. (BZ# 477115 , BZ# 789346 ) All users who require ipset and libmnl are advised to install these new packages.
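As a brief illustration of what the new packages provide, the following creates a small IP set and matches against it from iptables. This is a sketch only; the set name and addresses are placeholders:
# Install the new packages on Red Hat Enterprise Linux 6
yum install -y ipset

# Create a hash:ip set, add an address, and review the set
ipset create blocklist hash:ip
ipset add blocklist 192.0.2.10
ipset list blocklist

# Match the set from an iptables rule
iptables -I INPUT -m set --match-set blocklist src -j DROP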
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/rhea-2012-0829
Chapter 1. Overview of model registries
Chapter 1. Overview of model registries Important Model registry is currently available in Red Hat OpenShift AI 2.18 as a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . A model registry is an important component in the lifecycle of an artificial intelligence/machine learning (AI/ML) model, and a vital part of any machine learning operations (MLOps) platform or ML workflow. A model registry acts as a central repository, holding metadata related to machine learning models from inception to deployment. This metadata ranges from high-level information like the deployment environment and project origins, to intricate details like training hyperparameters, performance metrics, and deployment events. A model registry acts as a bridge between model experimentation and serving, offering a secure, collaborative metadata store interface for stakeholders of the ML lifecycle. Model registries provide a structured and organized way to store, share, version, deploy, and track models. To use model registries in OpenShift AI, an OpenShift cluster administrator must configure the model registry component. For more information, see Configuring the model registry component . After the model registry component is configured, an OpenShift AI administrator can create model registries in OpenShift AI and grant model registry access to the data scientists that will work with them. For more information, see Managing model registries . Data scientists with access to a model registry can store, share, version, deploy, and track models using the model registry feature. For more information, see Working with model registries .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/working_with_model_registries/overview-of-model-registries_working-model-registry
Chapter 3. Working with host groups
Chapter 3. Working with host groups A host group acts as a template for common host settings. Instead of defining the settings individually for each host, use host groups to define common settings once and apply them to multiple hosts. 3.1. Host group settings and nested host groups A host group can define many settings for hosts, such as lifecycle environment, content view, or Ansible roles that are available to the hosts. Important When you change the settings of an existing host group, the new settings do not propagate to the hosts assigned to the host group. Only Puppet class settings get updated on hosts after you change them in the host group. You can create a hierarchy of host groups. Aim to have one base-level host group that represents all hosts in your organization and provides general settings, and then nested groups that provide specific settings. Satellite applies host settings in the following order when nesting host groups: Host settings take priority over host group settings. Nested host group settings take priority over parent host group settings. Example 3.1. Nested host group hierarchy You create a top-level host group named Base and two nested host groups named Webservers and Storage . The nested host groups are associated with multiple hosts. You also create host custom.example.com that is not associated with any host group. You define the operating system on the top-level host group ( Base ) and Ansible roles on the nested host groups ( Webservers and Storage ). Top-level host group Nested host group Hosts Settings inherited from host groups Base This host group applies the Red Hat Enterprise Linux 8.8 operating system setting. Webservers This host group applies the linux-system-roles.selinux Ansible role. webserver1.example.com Hosts use the following settings: Red Hat Enterprise Linux 8.8 defined by host group Base linux-system-roles.selinux defined by host group Webservers webserver2.example.com Storage This host group applies the linux-system-roles.postfix Ansible role. storage1.example.com Hosts use the following settings: Red Hat Enterprise Linux 8.8 defined by host group Base linux-system-roles.postfix defined by host group Storage storage2.example.com [No host group] custom.example.com No settings inherited from host groups. Example 3.2. Nested host group settings You create a top-level host group named Base and two nested host groups named Webservers and Storage . You also create host custom.example.com that is associated with the top-level host group Base , but no nested host group. You define different values for the operating system and Ansible role settings on the top-level host group ( Base ) and nested host groups ( Webservers and Storage ). 
Top-level host group Nested host group Host Settings inherited from host groups Base This host group applies these settings: The Red Hat Enterprise Linux 8.8 operating system The linux-system-roles.selinux Ansible role Webservers This host group applies these settings: The Red Hat Enterprise Linux 8.9 operating system No Ansible role webserver1.example.com Hosts use the following settings: The Red Hat Enterprise Linux 8.9 operating system from host group Webservers The linux-system-roles.selinux Ansible role from host group Base webserver2.example.com Storage This host group applies these settings: No operating system The linux-system-roles.postfix Ansible role storage1.example.com Hosts use the following settings: The Red Hat Enterprise Linux 8.8 operating system from host group Base The linux-system-roles.postfix Ansible role from host group Storage storage2.example.com [No nested host group] custom.example.com Host uses the following settings: The Red Hat Enterprise Linux 8.8 operating system from host group Base The linux-system-roles.selinux Ansible role from host group Base 3.2. Creating a host group Create a host group to be able to apply host settings to multiple hosts. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Configure > Host Groups and click Create Host Group . If you have an existing host group that you want to inherit attributes from, you can select a host group from the Parent list. If you do not, leave this field blank. Enter a Name for the new host group. Enter any further information that you want future hosts to inherit. Click the Ansible Roles tab, and from the Ansible Roles list, select one or more roles that you want to add to the host. Use the arrow icon to manage the roles that you add or remove. Click the additional tabs and add any details that you want to attribute to the host group. Note Puppet fails to retrieve the Puppet CA certificate while registering a host with a host group associated with a Puppet environment created inside a Production environment. To create a suitable Puppet environment to be associated with a host group, manually create a directory: Click Submit to save the host group. CLI procedure Create the host group with the hammer hostgroup create command. For example: 3.3. Creating a host group for each lifecycle environment Use this procedure to create a host group for the Library lifecycle environment and add nested host groups for other lifecycle environments. Procedure To create a host group for each lifecycle environment, run the following Bash script: MAJOR=" My_Major_Operating_System_Version " ARCH=" My_Architecture " ORG=" My_Organization " LOCATIONS=" My_Location " PTABLE_NAME=" My_Partition_Table " DOMAIN=" My_Domain " hammer --output csv --no-headers lifecycle-environment list --organization "USD{ORG}" | cut -d ',' -f 2 | while read LC_ENV; do [[ USD{LC_ENV} == "Library" ]] && continue hammer hostgroup create --name "rhel-USD{MAJOR}server-USD{ARCH}-USD{LC_ENV}" \ --architecture "USD{ARCH}" \ --partition-table "USD{PTABLE_NAME}" \ --domain "USD{DOMAIN}" \ --organizations "USD{ORG}" \ --query-organization "USD{ORG}" \ --locations "USD{LOCATIONS}" \ --lifecycle-environment "USD{LC_ENV}" done 3.4. Adding a host to a host group You can add a host to a host group in the Satellite web UI. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click the name of the host you want to modify. Click Edit . Select the host group from the Host Group list. 
Click Submit . Verification The Details card under the Overview tab now shows the host group your host belongs to. 3.5. Changing the host group of a host Use this procedure to change the Host Group of a host. If you reprovision a host after changing the host group, the fresh values that the host inherits from the host group will be applied. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click the name of the host you want to modify. Click Edit . Select the new host group from the Host Group list. Click Submit . Verification The Details card under the Overview tab now shows the host group your host belongs to.
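As a hedged Hammer sketch for the tasks in this chapter (the host group and host names are examples, and option names can vary slightly between Satellite versions), a nested host group can be created under an existing parent and an existing host can be moved into it from the CLI:

# Create a nested host group under the existing "Base" host group
hammer hostgroup create --name "Webservers" --parent "Base" \
  --organizations "My_Organization" --locations "My_Location"
# Move an existing host into that host group
hammer host update --name "custom.example.com" --hostgroup "Webservers"
# Confirm the host group hierarchy
hammer hostgroup list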
[ "mkdir /etc/puppetlabs/code/environments/ example_environment", "hammer hostgroup create --name \"Base\" --architecture \"My_Architecture\" --content-source-id _My_Content_Source_ID_ --content-view \"_My_Content_View_\" --domain \"_My_Domain_\" --lifecycle-environment \"_My_Lifecycle_Environment_\" --locations \"_My_Location_\" --medium-id _My_Installation_Medium_ID_ --operatingsystem \"_My_Operating_System_\" --organizations \"_My_Organization_\" --partition-table \"_My_Partition_Table_\" --puppet-ca-proxy-id _My_Puppet_CA_Proxy_ID_ --puppet-environment \"_My_Puppet_Environment_\" --puppet-proxy-id _My_Puppet_Proxy_ID_ --root-pass \"My_Password\" --subnet \"_My_Subnet_\"", "MAJOR=\" My_Major_Operating_System_Version \" ARCH=\" My_Architecture \" ORG=\" My_Organization \" LOCATIONS=\" My_Location \" PTABLE_NAME=\" My_Partition_Table \" DOMAIN=\" My_Domain \" hammer --output csv --no-headers lifecycle-environment list --organization \"USD{ORG}\" | cut -d ',' -f 2 | while read LC_ENV; do [[ USD{LC_ENV} == \"Library\" ]] && continue hammer hostgroup create --name \"rhel-USD{MAJOR}server-USD{ARCH}-USD{LC_ENV}\" --architecture \"USD{ARCH}\" --partition-table \"USD{PTABLE_NAME}\" --domain \"USD{DOMAIN}\" --organizations \"USD{ORG}\" --query-organization \"USD{ORG}\" --locations \"USD{LOCATIONS}\" --lifecycle-environment \"USD{LC_ENV}\" done" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_hosts/Working_with_Host_Groups_managing-hosts
Chapter 19. Determining Certificate System Product Version
Chapter 19. Determining Certificate System Product Version The Red Hat Certificate System product version is stored in the /usr/share/pki/CS_SERVER_VERSION file. To display the version: To find the product version of a running server, access the following URLs from your browser: http:// host_name : port_number /ca/admin/ca/getStatus http:// host_name : port_number /kra/admin/kra/getStatus http:// host_name : port_number /ocsp/admin/ocsp/getStatus http:// host_name : port_number /tks/admin/tks/getStatus http:// host_name : port_number /tps/admin/tps/getStatus Note Each component is a separate package and thus can have a separate version number. The URLs above show the version number for each currently running component.
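As a hedged convenience, the same getStatus endpoints can be queried from the command line instead of a browser; the host name and port are placeholders, and each command returns the status response of the corresponding running subsystem.

curl http://<host_name>:<port_number>/ca/admin/ca/getStatus
curl http://<host_name>:<port_number>/kra/admin/kra/getStatus
curl http://<host_name>:<port_number>/ocsp/admin/ocsp/getStatus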
[ "cat /usr/share/pki/CS_SERVER_VERSION Red Hat Certificate System 10.0 (Batch Update 1)" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/determining_certificate_system_product_version
Chapter 155. KafkaRebalance schema reference
Chapter 155. KafkaRebalance schema reference Property Property type Description spec KafkaRebalanceSpec The specification of the Kafka rebalance. status KafkaRebalanceStatus The status of the Kafka rebalance.
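For orientation only, a minimal KafkaRebalance resource might look like the following sketch; it assumes a Kafka cluster named my-cluster with Cruise Control enabled, and the empty spec falls back to the default rebalance goals. After Cruise Control generates the optimization proposal, it is typically approved by annotating the resource.

cat <<EOF | oc apply -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: my-rebalance
  labels:
    strimzi.io/cluster: my-cluster
spec: {}
EOF

oc annotate kafkarebalance my-rebalance strimzi.io/rebalance=approve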
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-KafkaRebalance-reference
4.3.3. Displaying Volume Groups
4.3.3. Displaying Volume Groups There are two commands you can use to display properties of LVM volume groups: vgs and vgdisplay . The vgscan command, which scans all the disks for volume groups and rebuilds the LVM cache file, also displays the volume groups. For information on the vgscan command, see Section 4.3.4, "Scanning Disks for Volume Groups to Build the Cache File" . The vgs command provides volume group information in a configurable form, displaying one line per volume group. The vgs command provides a great deal of format control, and is useful for scripting. For information on using the vgs command to customize your output, see Section 4.9, "Customized Reporting for LVM" . The vgdisplay command displays volume group properties (such as size, extents, and number of physical volumes) in a fixed form. The following example shows the output of a vgdisplay command for the volume group new_vg . If you do not specify a volume group, all existing volume groups are displayed.
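As a short, hedged illustration of the configurable vgs output mentioned above, the following commands report on the same new_vg volume group; the field names are standard LVM report fields.

vgs new_vg
vgs --units g -o vg_name,pv_count,lv_count,vg_size,vg_free new_vg
vgs --noheadings -o vg_name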
[ "vgdisplay new_vg --- Volume group --- VG Name new_vg System ID Format lvm2 Metadata Areas 3 Metadata Sequence No 11 VG Access read/write VG Status resizable MAX LV 0 Cur LV 1 Open LV 0 Max PV 0 Cur PV 3 Act PV 3 VG Size 51.42 GB PE Size 4.00 MB Total PE 13164 Alloc PE / Size 13 / 52.00 MB Free PE / Size 13151 / 51.37 GB VG UUID jxQJ0a-ZKk0-OpMO-0118-nlwO-wwqd-fD5D32" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/vg_display
Chapter 2. Configuring external PostgreSQL databases
Chapter 2. Configuring external PostgreSQL databases As an administrator, you can configure and use external PostgreSQL databases in Red Hat Developer Hub. You can use a PostgreSQL certificate file to configure an external PostgreSQL instance using the Operator or Helm Chart. Note Developer Hub supports the configuration of external PostgreSQL databases. You can perform maintenance activities, such as backing up your data or configuring high availability (HA) for the external PostgreSQL databases. By default, the Red Hat Developer Hub operator or Helm Chart creates a local PostgreSQL database. However, this configuration is not suitable for the production environments. For production deployments, disable the creation of local database and configure Developer Hub to connect to an external PostgreSQL instance instead. 2.1. Configuring an external PostgreSQL instance using the Operator You can configure an external PostgreSQL instance using the Red Hat Developer Hub Operator. By default, the Operator creates and manages a local instance of PostgreSQL in the same namespace where you have deployed the RHDH instance. However, you can change this default setting to configure an external PostgreSQL database server, for example, Amazon Web Services (AWS) Relational Database Service (RDS) or Azure database. Prerequisites You are using a supported version of PostgreSQL. For more information, see the Product life cycle page . You have the following details: db-host : Denotes your PostgreSQL instance Domain Name System (DNS) or IP address db-port : Denotes your PostgreSQL instance port number, such as 5432 username : Denotes the user name to connect to your PostgreSQL instance password : Denotes the password to connect to your PostgreSQL instance You have installed the Red Hat Developer Hub Operator. Optional: You have a CA certificate, Transport Layer Security (TLS) private key, and TLS certificate so that you can secure your database connection by using the TLS protocol. For more information, refer to your PostgreSQL vendor documentation. Note By default, Developer Hub uses a database for each plugin and automatically creates it if none is found. You might need the Create Database privilege in addition to PSQL Database privileges for configuring an external PostgreSQL instance. Procedure Optional: Create a certificate secret to configure your PostgreSQL instance with a TLS connection: cat <<EOF | oc -n <your-namespace> create -f - apiVersion: v1 kind: Secret metadata: name: <crt-secret> 1 type: Opaque stringData: postgres-ca.pem: |- -----BEGIN CERTIFICATE----- <ca-certificate-key> 2 postgres-key.key: |- -----BEGIN CERTIFICATE----- <tls-private-key> 3 postgres-crt.pem: |- -----BEGIN CERTIFICATE----- <tls-certificate-key> 4 # ... EOF 1 Provide the name of the certificate secret. 2 Provide the CA certificate key. 3 Optional: Provide the TLS private key. 4 Optional: Provide the TLS certificate key. Create a credential secret to connect with the PostgreSQL instance: cat <<EOF | oc -n <your-namespace> create -f - apiVersion: v1 kind: Secret metadata: name: <cred-secret> 1 type: Opaque stringData: 2 POSTGRES_PASSWORD: <password> POSTGRES_PORT: "<db-port>" POSTGRES_USER: <username> POSTGRES_HOST: <db-host> PGSSLMODE: <ssl-mode> # for TLS connection 3 NODE_EXTRA_CA_CERTS: <abs-path-to-pem-file> # for TLS connection, e.g. /opt/app-root/src/postgres-crt.pem 4 EOF 1 Provide the name of the credential secret. 2 Provide credential data to connect with your PostgreSQL instance. 
3 Optional: Provide the value based on the required Secure Sockets Layer (SSL) mode . 4 Optional: Provide the value only if you need a TLS connection for your PostgreSQL instance. Create a Backstage custom resource (CR): cat <<EOF | oc -n <your-namespace> create -f - apiVersion: rhdh.redhat.com/v1alpha1 kind: Backstage metadata: name: <backstage-instance-name> spec: database: enableLocalDb: false 1 application: extraFiles: mountPath: <path> # e g /opt/app-root/src secrets: - name: <crt-secret> 2 key: postgres-crt.pem, postgres-ca.pem, postgres-key.key # key name as in <crt-secret> Secret extraEnvs: secrets: - name: <cred-secret> 3 # ... 1 Set the value of the enableLocalDb parameter to false to disable creating local PostgreSQL instances. 2 Provide the name of the certificate secret if you have configured a TLS connection. 3 Provide the name of the credential secret that you created. Note The environment variables listed in the Backstage CR work with the Operator default configuration. If you have changed the Operator default configuration, you must reconfigure the Backstage CR accordingly. Apply the Backstage CR to the namespace where you have deployed the RHDH instance. 2.2. Configuring an external PostgreSQL instance using the Helm Chart You can configure an external PostgreSQL instance by using the Helm Chart. By default, the Helm Chart creates and manages a local instance of PostgreSQL in the same namespace where you have deployed the RHDH instance. However, you can change this default setting to configure an external PostgreSQL database server, for example, Amazon Web Services (AWS) Relational Database Service (RDS) or Azure database. Prerequisites You are using a supported version of PostgreSQL. For more information, see the Product life cycle page . You have the following details: db-host : Denotes your PostgreSQL instance Domain Name System (DNS) or IP address db-port : Denotes your PostgreSQL instance port number, such as 5432 username : Denotes the user name to connect to your PostgreSQL instance password : Denotes the password to connect to your PostgreSQL instance You have installed the RHDH application by using the Helm Chart. Optional: You have a CA certificate, Transport Layer Security (TLS) private key, and TLS certificate so that you can secure your database connection by using the TLS protocol. For more information, refer to your PostgreSQL vendor documentation. Note By default, Developer Hub uses a database for each plugin and automatically creates it if none is found. You might need the Create Database privilege in addition to PSQL Database privileges for configuring an external PostgreSQL instance. Procedure Optional: Create a certificate secret to configure your PostgreSQL instance with a TLS connection: cat <<EOF | oc -n <your-namespace> create -f - apiVersion: v1 kind: Secret metadata: name: <crt-secret> 1 type: Opaque stringData: postgres-ca.pem: |- -----BEGIN CERTIFICATE----- <ca-certificate-key> 2 postgres-key.key: |- -----BEGIN CERTIFICATE----- <tls-private-key> 3 postgres-crt.pem: |- -----BEGIN CERTIFICATE----- <tls-certificate-key> 4 # ... EOF 1 Provide the name of the certificate secret. 2 Provide the CA certificate key. 3 Optional: Provide the TLS private key. 4 Optional: Provide the TLS certificate key. 
Create a credential secret to connect with the PostgreSQL instance: cat <<EOF | oc -n <your-namespace> create -f - apiVersion: v1 kind: Secret metadata: name: <cred-secret> 1 type: Opaque stringData: 2 POSTGRES_PASSWORD: <password> POSTGRES_PORT: "<db-port>" POSTGRES_USER: <username> POSTGRES_HOST: <db-host> PGSSLMODE: <ssl-mode> # for TLS connection 3 NODE_EXTRA_CA_CERTS: <abs-path-to-pem-file> # for TLS connection, e.g. /opt/app-root/src/postgres-crt.pem 4 EOF 1 Provide the name of the credential secret. 2 Provide credential data to connect with your PostgreSQL instance. 3 Optional: Provide the value based on the required Secure Sockets Layer (SSL) mode . 4 Optional: Provide the value only if you need a TLS connection for your PostgreSQL instance. Configure your PostgreSQL instance in the Helm configuration file named values.yaml : # ... upstream: postgresql: enabled: false # disable PostgreSQL instance creation 1 auth: existingSecret: <cred-secret> # inject credentials secret to Backstage 2 backstage: appConfig: backend: database: connection: # configure Backstage DB connection parameters host: USD{POSTGRES_HOST} port: USD{POSTGRES_PORT} user: USD{POSTGRES_USER} password: USD{POSTGRES_PASSWORD} ssl: rejectUnauthorized: true, ca: USDfile: /opt/app-root/src/postgres-ca.pem key: USDfile: /opt/app-root/src/postgres-key.key cert: USDfile: /opt/app-root/src/postgres-crt.pem extraEnvVarsSecrets: - <cred-secret> # inject credentials secret to Backstage 3 extraEnvVars: - name: BACKEND_SECRET valueFrom: secretKeyRef: key: backend-secret name: '{{ include "janus-idp.backend-secret-name" USD }}' extraVolumeMounts: - mountPath: /opt/app-root/src/dynamic-plugins-root name: dynamic-plugins-root - mountPath: /opt/app-root/src/postgres-crt.pem name: postgres-crt # inject TLS certificate to Backstage cont. 4 subPath: postgres-crt.pem - mountPath: /opt/app-root/src/postgres-ca.pem name: postgres-ca # inject CA certificate to Backstage cont. 5 subPath: postgres-ca.pem - mountPath: /opt/app-root/src/postgres-key.key name: postgres-key # inject TLS private key to Backstage cont. 6 subPath: postgres-key.key extraVolumes: - ephemeral: volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi name: dynamic-plugins-root - configMap: defaultMode: 420 name: dynamic-plugins optional: true name: dynamic-plugins - name: dynamic-plugins-npmrc secret: defaultMode: 420 optional: true secretName: dynamic-plugins-npmrc - name: postgres-crt secret: secretName: <crt-secret> 7 # ... 1 Set the value of the upstream.postgresql.enabled parameter to false to disable creating local PostgreSQL instances. 2 Provide the name of the credential secret. 3 Provide the name of the credential secret. 4 Optional: Provide the name of the TLS certificate only for a TLS connection. 5 Optional: Provide the name of the CA certificate only for a TLS connection. 6 Optional: Provide the name of the TLS private key only if your TLS connection requires a private key. 7 Provide the name of the certificate secret if you have configured a TLS connection. Apply the configuration changes in your Helm configuration file named values.yaml : helm upgrade -n <your-namespace> <your-deploy-name> openshift-helm-charts/redhat-developer-hub -f values.yaml --version 1.2.6 2.3. Migrating local databases to an external database server using the Operator By default, Red Hat Developer Hub hosts the data for each plugin in a PostgreSQL database. 
When you fetch the list of databases, you might see multiple databases based on the number of plugins configured in Developer Hub. You can migrate the data from an RHDH instance hosted on a local PostgreSQL server to an external PostgreSQL service, such as AWS RDS, Azure database, or Crunchy database. To migrate the data from each RHDH instance, you can use PostgreSQL utilities, such as pg_dump with psql or pgAdmin . Note The following procedure uses a database copy script to do a quick migration. Prerequisites You have installed the pg_dump and psql utilities on your local machine. For data export, you have the PGSQL user privileges to make a full dump of local databases. For data import, you have the PGSQL admin privileges to create an external database and populate it with database dumps. Procedure Configure port forwarding for the local PostgreSQL database pod by running the following command on a terminal: oc port-forward -n <your-namespace> <pgsql-pod-name> <forward-to-port>:<forward-from-port> Where: The <pgsql-pod-name> variable denotes the name of a PostgreSQL pod with the format backstage-psql-<deployment-name>-<_index> . The <forward-to-port> variable denotes the port of your choice to forward PostgreSQL data to. The <forward-from-port> variable denotes the local PostgreSQL instance port, such as 5432 . Example: Configuring port forwarding oc port-forward -n developer-hub backstage-psql-developer-hub-0 15432:5432 Make a copy of the following db_copy.sh script and edit the details based on your configuration: #!/bin/bash to_host=<db-service-host> 1 to_port=5432 2 to_user=postgres 3 from_host=127.0.0.1 4 from_port=15432 5 from_user=postgres 6 allDB=("backstage_plugin_app" "backstage_plugin_auth" "backstage_plugin_catalog" "backstage_plugin_permission" "backstage_plugin_scaffolder" "backstage_plugin_search") 7 for db in USD{!allDB[@]}; do db=USD{allDB[USDdb]} echo Copying database: USDdb PGPASSWORD=USDTO_PSW psql -h USDto_host -p USDto_port -U USDto_user -c "create database USDdb;" pg_dump -h USDfrom_host -p USDfrom_port -U USDfrom_user -d USDdb | PGPASSWORD=USDTO_PSW psql -h USDto_host -p USDto_port -U USDto_user -d USDdb done 1 The destination host name, for example, <db-instance-name>.rds.amazonaws.com . 2 The destination port, such as 5432 . 3 The destination server username, for example, postgres . 4 The source host name, such as 127.0.0.1 . 5 The source port number, such as the <forward-to-port> variable. 6 The source server username, for example, postgres . 7 The name of databases to import in double quotes separated by spaces, for example, ("backstage_plugin_app" "backstage_plugin_auth" "backstage_plugin_catalog" "backstage_plugin_permission" "backstage_plugin_scaffolder" "backstage_plugin_search") . Create a destination database for copying the data: /bin/bash TO_PSW=<destination-db-password> /path/to/db_copy.sh 1 1 The <destination-db-password> variable denotes the password to connect to the destination database. Note You can stop port forwarding when the copying of the data is complete. For more information about handling large databases and using the compression tools, see the Handling Large Databases section on the PostgreSQL website. Reconfigure your Backstage custom resource (CR). For more information, see Configuring an external PostgreSQL instance using the Operator . Check that the following code is present at the end of your Backstage CR after reconfiguration: # ... spec: database: enableLocalDb: false application: # ... 
extraFiles: secrets: - name: <crt-secret> key: postgres-crt.pem # key name as in <crt-secret> Secret extraEnvs: secrets: - name: <cred-secret> # ... Note Reconfiguring the Backstage CR deletes the corresponding StatefulSet and Pod objects, but does not delete the PersistenceVolumeClaim object. Use the following command to delete the local PersistenceVolumeClaim object: oc -n developer-hub delete pvc <local-psql-pvc-name> where, the <local-psql-pvc-name> variable is in the data-<psql-pod-name> format. Apply the configuration changes. Verification Verify that your RHDH instance is running with the migrated data and does not contain the local PostgreSQL database by running the following command: oc get pods -n <your-namespace> Check the output for the following details: The backstage-developer-hub-xxx pod is in running state. The backstage-psql-developer-hub-0 pod is not available. You can also verify these details using the Topology view in the OpenShift Container Platform web console.
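As an additional hedged check that is not part of the documented procedure, the plugin databases can be listed directly on the external server with psql to confirm that the dumps were imported; the host, port, user, and password placeholders match the ones used in the db_copy.sh script.

PGPASSWORD=<destination-db-password> psql -h <db-service-host> -p 5432 -U postgres -c '\l'
# Spot-check the tables of a single migrated database
PGPASSWORD=<destination-db-password> psql -h <db-service-host> -p 5432 -U postgres -d backstage_plugin_catalog -c '\dt'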
[ "cat <<EOF | oc -n <your-namespace> create -f - apiVersion: v1 kind: Secret metadata: name: <crt-secret> 1 type: Opaque stringData: postgres-ca.pem: |- -----BEGIN CERTIFICATE----- <ca-certificate-key> 2 postgres-key.key: |- -----BEGIN CERTIFICATE----- <tls-private-key> 3 postgres-crt.pem: |- -----BEGIN CERTIFICATE----- <tls-certificate-key> 4 # EOF", "cat <<EOF | oc -n <your-namespace> create -f - apiVersion: v1 kind: Secret metadata: name: <cred-secret> 1 type: Opaque stringData: 2 POSTGRES_PASSWORD: <password> POSTGRES_PORT: \"<db-port>\" POSTGRES_USER: <username> POSTGRES_HOST: <db-host> PGSSLMODE: <ssl-mode> # for TLS connection 3 NODE_EXTRA_CA_CERTS: <abs-path-to-pem-file> # for TLS connection, e.g. /opt/app-root/src/postgres-crt.pem 4 EOF", "cat <<EOF | oc -n <your-namespace> create -f - apiVersion: rhdh.redhat.com/v1alpha1 kind: Backstage metadata: name: <backstage-instance-name> spec: database: enableLocalDb: false 1 application: extraFiles: mountPath: <path> # e g /opt/app-root/src secrets: - name: <crt-secret> 2 key: postgres-crt.pem, postgres-ca.pem, postgres-key.key # key name as in <crt-secret> Secret extraEnvs: secrets: - name: <cred-secret> 3 #", "cat <<EOF | oc -n <your-namespace> create -f - apiVersion: v1 kind: Secret metadata: name: <crt-secret> 1 type: Opaque stringData: postgres-ca.pem: |- -----BEGIN CERTIFICATE----- <ca-certificate-key> 2 postgres-key.key: |- -----BEGIN CERTIFICATE----- <tls-private-key> 3 postgres-crt.pem: |- -----BEGIN CERTIFICATE----- <tls-certificate-key> 4 # EOF", "cat <<EOF | oc -n <your-namespace> create -f - apiVersion: v1 kind: Secret metadata: name: <cred-secret> 1 type: Opaque stringData: 2 POSTGRES_PASSWORD: <password> POSTGRES_PORT: \"<db-port>\" POSTGRES_USER: <username> POSTGRES_HOST: <db-host> PGSSLMODE: <ssl-mode> # for TLS connection 3 NODE_EXTRA_CA_CERTS: <abs-path-to-pem-file> # for TLS connection, e.g. /opt/app-root/src/postgres-crt.pem 4 EOF", "upstream: postgresql: enabled: false # disable PostgreSQL instance creation 1 auth: existingSecret: <cred-secret> # inject credentials secret to Backstage 2 backstage: appConfig: backend: database: connection: # configure Backstage DB connection parameters host: USD{POSTGRES_HOST} port: USD{POSTGRES_PORT} user: USD{POSTGRES_USER} password: USD{POSTGRES_PASSWORD} ssl: rejectUnauthorized: true, ca: USDfile: /opt/app-root/src/postgres-ca.pem key: USDfile: /opt/app-root/src/postgres-key.key cert: USDfile: /opt/app-root/src/postgres-crt.pem extraEnvVarsSecrets: - <cred-secret> # inject credentials secret to Backstage 3 extraEnvVars: - name: BACKEND_SECRET valueFrom: secretKeyRef: key: backend-secret name: '{{ include \"janus-idp.backend-secret-name\" USD }}' extraVolumeMounts: - mountPath: /opt/app-root/src/dynamic-plugins-root name: dynamic-plugins-root - mountPath: /opt/app-root/src/postgres-crt.pem name: postgres-crt # inject TLS certificate to Backstage cont. 4 subPath: postgres-crt.pem - mountPath: /opt/app-root/src/postgres-ca.pem name: postgres-ca # inject CA certificate to Backstage cont. 5 subPath: postgres-ca.pem - mountPath: /opt/app-root/src/postgres-key.key name: postgres-key # inject TLS private key to Backstage cont. 
6 subPath: postgres-key.key extraVolumes: - ephemeral: volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi name: dynamic-plugins-root - configMap: defaultMode: 420 name: dynamic-plugins optional: true name: dynamic-plugins - name: dynamic-plugins-npmrc secret: defaultMode: 420 optional: true secretName: dynamic-plugins-npmrc - name: postgres-crt secret: secretName: <crt-secret> 7 #", "helm upgrade -n <your-namespace> <your-deploy-name> openshift-helm-charts/redhat-developer-hub -f values.yaml --version 1.2.6", "port-forward -n <your-namespace> <pgsql-pod-name> <forward-to-port>:<forward-from-port>", "port-forward -n developer-hub backstage-psql-developer-hub-0 15432:5432", "#!/bin/bash to_host=<db-service-host> 1 to_port=5432 2 to_user=postgres 3 from_host=127.0.0.1 4 from_port=15432 5 from_user=postgres 6 allDB=(\"backstage_plugin_app\" \"backstage_plugin_auth\" \"backstage_plugin_catalog\" \"backstage_plugin_permission\" \"backstage_plugin_scaffolder\" \"backstage_plugin_search\") 7 for db in USD{!allDB[@]}; do db=USD{allDB[USDdb]} echo Copying database: USDdb PGPASSWORD=USDTO_PSW psql -h USDto_host -p USDto_port -U USDto_user -c \"create database USDdb;\" pg_dump -h USDfrom_host -p USDfrom_port -U USDfrom_user -d USDdb | PGPASSWORD=USDTO_PSW psql -h USDto_host -p USDto_port -U USDto_user -d USDdb done", "/bin/bash TO_PSW=<destination-db-password> /path/to/db_copy.sh 1", "spec: database: enableLocalDb: false application: # extraFiles: secrets: - name: <crt-secret> key: postgres-crt.pem # key name as in <crt-secret> Secret extraEnvs: secrets: - name: <cred-secret>", "-n developer-hub delete pvc <local-psql-pvc-name>", "get pods -n <your-namespace>" ]
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.2/html/administration_guide_for_red_hat_developer_hub/assembly-configuring-external-postgresql-databases
Chapter 1. Getting started with Red Hat Quay configuration
Chapter 1. Getting started with Red Hat Quay configuration Red Hat Quay can be deployed as an independent, standalone installation, or by using the Red Hat Quay Operator on OpenShift Container Platform. How you create, retrieve, update, and validate the Red Hat Quay configuration varies depending on the type of deployment you are using. However, the core configuration options are the same for either deployment type. Core configuration is primarily set through a config.yaml file, but can also be set by using the configuration API. For standalone deployments of Red Hat Quay, you must supply the minimum required configuration parameters before the registry can be started. The minimum requirements to start a Red Hat Quay registry can be found in the "Retrieving the current configuration" section. If you install Red Hat Quay on OpenShift Container Platform using the Red Hat Quay Operator, you do not need to supply configuration parameters because the Red Hat Quay Operator supplies default information to deploy the registry. After you have deployed Red Hat Quay with the desired configuration, you should retrieve and save the full configuration from your deployment. The full configuration contains additional generated values that you might need when restarting or upgrading your system.
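For orientation, a standalone config.yaml typically carries fields along the following lines. This is an illustrative sketch only: the host names, credentials, and storage layout are assumptions, and the authoritative list of minimum required parameters is the one referenced in the "Retrieving the current configuration" section.

cat > config.yaml <<EOF
SERVER_HOSTNAME: quay.example.com
SETUP_COMPLETE: true
DB_URI: postgresql://quayuser:<password>@db.example.com:5432/quay
DATABASE_SECRET_KEY: <generated-key>
SECRET_KEY: <generated-key>
BUILDLOGS_REDIS:
  host: redis.example.com
  port: 6379
USER_EVENTS_REDIS:
  host: redis.example.com
  port: 6379
DISTRIBUTED_STORAGE_CONFIG:
  default:
    - LocalStorage
    - storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_PREFERENCE:
  - default
EOF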
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/configure_red_hat_quay/config-intro
Chapter 9. Troubleshooting hosted control planes
Chapter 9. Troubleshooting hosted control planes If you encounter issues with hosted control planes, see the following information to guide you through troubleshooting. 9.1. Gathering information to troubleshoot hosted control planes When you need to troubleshoot an issue with hosted control plane clusters, you can gather information by running the must-gather command. The command generates output for the management cluster and the hosted cluster. The output for the management cluster contains the following content: Cluster-scoped resources: These resources are node definitions of the management cluster. The hypershift-dump compressed file: This file is useful if you need to share the content with other people. Namespaced resources: These resources include all of the objects from the relevant namespaces, such as config maps, services, events, and logs. Network logs: These logs include the OVN northbound and southbound databases and the status for each one. Hosted clusters: This level of output involves all of the resources inside of the hosted cluster. The output for the hosted cluster contains the following content: Cluster-scoped resources: These resources include all of the cluster-wide objects, such as nodes and CRDs. Namespaced resources: These resources include all of the objects from the relevant namespaces, such as config maps, services, events, and logs. Although the output does not contain any secret objects from the cluster, it can contain references to the names of secrets. Prerequisites You must have cluster-admin access to the management cluster. You need the name value for the HostedCluster resource and the namespace where the CR is deployed. You must have the hcp command line interface installed. For more information, see Installing the hosted control planes command line interface . You must have the OpenShift CLI ( oc ) installed. You must ensure that the kubeconfig file is loaded and is pointing to the management cluster. Procedure To gather the output for troubleshooting, enter the following command: USD oc adm must-gather --image=registry.redhat.io/multicluster-engine/must-gather-rhel9:v<mce_version> \ /usr/bin/gather hosted-cluster-namespace=HOSTEDCLUSTERNAMESPACE hosted-cluster-name=HOSTEDCLUSTERNAME \ --dest-dir=NAME ; tar -cvzf NAME.tgz NAME where: You replace <mce_version> with the version of multicluster engine Operator that you are using; for example, 2.6 . The hosted-cluster-namespace=HOSTEDCLUSTERNAMESPACE parameter is optional. If you do not include it, the command runs as though the hosted cluster is in the default namespace, which is clusters . The --dest-dir=NAME parameter is optional. Specify that parameter if you want to save the results of the command to a compressed file, replacing NAME with the name of the directory where you want to save the results. 9.2. Restarting hosted control plane components If you are an administrator for hosted control planes, you can use the hypershift.openshift.io/restart-date annotation to restart all control plane components for a particular HostedCluster resource. For example, you might need to restart control plane components for certificate rotation. Procedure To restart a control plane, annotate the HostedCluster resource by entering the following command: USD oc annotate hostedcluster \ -n <hosted_cluster_namespace> \ <hosted_cluster_name> \ hypershift.openshift.io/restart-date=USD(date --iso-8601=seconds) 1 1 The control plane is restarted whenever the value of the annotation changes. 
The date command serves as the source of a unique string. The annotation is treated as a string, not a timestamp. Verification After you restart a control plane, the following hosted control plane components are typically restarted: Note You might see some additional components restarting as a side effect of changes implemented by the other components. catalog-operator certified-operators-catalog cluster-api cluster-autoscaler cluster-policy-controller cluster-version-operator community-operators-catalog control-plane-operator hosted-cluster-config-operator ignition-server ingress-operator konnectivity-agent konnectivity-server kube-apiserver kube-controller-manager kube-scheduler machine-approver oauth-openshift olm-operator openshift-apiserver openshift-controller-manager openshift-oauth-apiserver packageserver redhat-marketplace-catalog redhat-operators-catalog 9.3. Pausing the reconciliation of a hosted cluster and hosted control plane If you are a cluster instance administrator, you can pause the reconciliation of a hosted cluster and hosted control plane. You might want to pause reconciliation when you back up and restore an etcd database or when you need to debug problems with a hosted cluster or hosted control plane. Procedure To pause reconciliation for a hosted cluster and hosted control plane, populate the pausedUntil field of the HostedCluster resource. To pause the reconciliation until a specific time, enter the following command: USD oc patch -n <hosted_cluster_namespace> hostedclusters/<hosted_cluster_name> -p '{"spec":{"pausedUntil":"<timestamp>"}}' --type=merge 1 1 Specify a timestamp in the RFC 3339 format, for example, 2024-03-03T03:28:48Z . The reconciliation is paused until the specified time has passed. To pause the reconciliation indefinitely, enter the following command: USD oc patch -n <hosted_cluster_namespace> hostedclusters/<hosted_cluster_name> -p '{"spec":{"pausedUntil":"true"}}' --type=merge The reconciliation is paused until you remove the field from the HostedCluster resource. When the pause reconciliation field is populated for the HostedCluster resource, the field is automatically added to the associated HostedControlPlane resource. To remove the pausedUntil field, enter the following patch command: USD oc patch -n <hosted_cluster_namespace> hostedclusters/<hosted_cluster_name> -p '{"spec":{"pausedUntil":null}}' --type=merge 9.4. Scaling down the data plane to zero If you are not using the hosted control plane, you can scale down the data plane to zero to save resources and cost. Note Ensure that you are prepared to scale down the data plane to zero, because the workload on the worker nodes disappears after scaling down. Procedure Set the kubeconfig file to access the hosted cluster by running the following command: USD export KUBECONFIG=<install_directory>/auth/kubeconfig Get the name of the NodePool resource associated with your hosted cluster by running the following command: USD oc get nodepool --namespace <HOSTED_CLUSTER_NAMESPACE> Optional: To prevent the pods from draining, add the nodeDrainTimeout field in the NodePool resource by running the following command: USD oc edit nodepool <nodepool_name> --namespace <hosted_cluster_namespace> Example output apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: # ... name: nodepool-1 namespace: clusters # ... 
spec: arch: amd64 clusterName: clustername 1 management: autoRepair: false replace: rollingUpdate: maxSurge: 1 maxUnavailable: 0 strategy: RollingUpdate upgradeType: Replace nodeDrainTimeout: 0s 2 # ... 1 Defines the name of your hosted cluster. 2 Specifies the total amount of time that the controller spends to drain a node. By default, the nodeDrainTimeout: 0s setting blocks the node draining process. Note To allow the node draining process to continue for a certain period of time, you can set the value of the nodeDrainTimeout field accordingly, for example, nodeDrainTimeout: 1m . Scale down the NodePool resource associated with your hosted cluster by running the following command: USD oc scale nodepool/<NODEPOOL_NAME> --namespace <HOSTED_CLUSTER_NAMESPACE> --replicas=0 Note After scaling down the data plane to zero, some pods in the control plane stay in the Pending status and the hosted control plane stays up and running. If necessary, you can scale up the NodePool resource. Optional: Scale up the NodePool resource associated with your hosted cluster by running the following command: USD oc scale nodepool/<NODEPOOL_NAME> --namespace <HOSTED_CLUSTER_NAMESPACE> --replicas=1 After rescaling the NodePool resource, wait for a couple of minutes for the NodePool resource to become available in a Ready state. Verification Verify that the value for the nodeDrainTimeout field is greater than 0s by running the following command: USD oc get nodepool -n <hosted_cluster_namespace> <nodepool_name> -ojsonpath='{.spec.nodeDrainTimeout}' Additional resources Must-gather for a hosted cluster
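Two small, hedged spot checks that complement the procedures above; the namespace and resource names are placeholders, and the jsonpath expressions only read fields that the procedures already set.

# Confirm whether reconciliation is currently paused (empty output means it is not paused)
oc get hostedcluster <hosted_cluster_name> -n <hosted_cluster_namespace> -o jsonpath='{.spec.pausedUntil}'
# Confirm the replica count of the node pool after scaling it down or back up
oc get nodepool <nodepool_name> -n <hosted_cluster_namespace> -o jsonpath='{.spec.replicas}'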
[ "oc adm must-gather --image=registry.redhat.io/multicluster-engine/must-gather-rhel9:v<mce_version> /usr/bin/gather hosted-cluster-namespace=HOSTEDCLUSTERNAMESPACE hosted-cluster-name=HOSTEDCLUSTERNAME --dest-dir=NAME ; tar -cvzf NAME.tgz NAME", "oc annotate hostedcluster -n <hosted_cluster_namespace> <hosted_cluster_name> hypershift.openshift.io/restart-date=USD(date --iso-8601=seconds) 1", "oc patch -n <hosted_cluster_namespace> hostedclusters/<hosted_cluster_name> -p '{\"spec\":{\"pausedUntil\":\"<timestamp>\"}}' --type=merge 1", "oc patch -n <hosted_cluster_namespace> hostedclusters/<hosted_cluster_name> -p '{\"spec\":{\"pausedUntil\":\"true\"}}' --type=merge", "oc patch -n <hosted_cluster_namespace> hostedclusters/<hosted_cluster_name> -p '{\"spec\":{\"pausedUntil\":null}}' --type=merge", "export KUBECONFIG=<install_directory>/auth/kubeconfig", "oc get nodepool --namespace <HOSTED_CLUSTER_NAMESPACE>", "oc edit nodepool <nodepool_name> --namespace <hosted_cluster_namespace>", "apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: name: nodepool-1 namespace: clusters spec: arch: amd64 clusterName: clustername 1 management: autoRepair: false replace: rollingUpdate: maxSurge: 1 maxUnavailable: 0 strategy: RollingUpdate upgradeType: Replace nodeDrainTimeout: 0s 2", "oc scale nodepool/<NODEPOOL_NAME> --namespace <HOSTED_CLUSTER_NAMESPACE> --replicas=0", "oc scale nodepool/<NODEPOOL_NAME> --namespace <HOSTED_CLUSTER_NAMESPACE> --replicas=1", "oc get nodepool -n <hosted_cluster_namespace> <nodepool_name> -ojsonpath='{.spec.nodeDrainTimeout}'" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/hosted_control_planes/troubleshooting-hosted-control-planes
Chapter 6. Securing CodeReady Workspaces
Chapter 6. Securing CodeReady Workspaces This section describes all aspects of user authentication, types of authentication, and permissions models on the CodeReady Workspaces server and its workspaces. Authenticating users Authorizing users 6.1. Authenticating users This document covers all aspects of user authentication in Red Hat CodeReady Workspaces, both on the CodeReady Workspaces server and in workspaces. This includes securing all REST API endpoints, WebSocket or JSON RPC connections, and some web resources. All authentication types use the JWT open standard as a container for transferring user identity information. In addition, CodeReady Workspaces server authentication is based on the OpenID Connect protocol implementation, which is provided by default by Keycloak . Authentication in workspaces implies the issuance of self-signed per-workspace JWT tokens and their verification on a dedicated service based on JWTProxy . 6.1.1. Authenticating to the CodeReady Workspaces server 6.1.1.1. Authenticating to the CodeReady Workspaces server using OpenID OpenID authentication on the CodeReady Workspaces server implies the presence of an external OpenID Connect provider and has the following main steps: Authenticate the user through a JWT token that is retrieved from an HTTP request or, in case of a missing or invalid token, redirect the user to the RH-SSO login page. Send authentication tokens in an Authorization header. In limited cases, when it is impossible to use the Authorization header, the token can be sent in the token query parameter. Example: OAuth authentication initialization. Compose an internal subject object that represents the current user inside the CodeReady Workspaces server code. Note The only supported and tested OpenID provider is RH-SSO. Procedure To authenticate to the CodeReady Workspaces server using OpenID authentication: Request the OpenID settings service where clients can find all the necessary URLs and properties of the OpenID provider, such as jwks.endpoint , token.endpoint , logout.endpoint , realm.name , or client_id returned in the JSON format. The service URL is https://codeready-<openshift_deployment_name>.<domain_name>/api/keycloak/settings , and it is only available in the CodeReady Workspaces multiuser mode. The presence of the service in the URL confirms that the authentication is enabled in the current deployment. Example output: { "che.keycloak.token.endpoint": "http://172.19.20.9:5050/auth/realms/che/protocol/openid-connect/token", "che.keycloak.profile.endpoint": "http://172.19.20.9:5050/auth/realms/che/account", "che.keycloak.client_id": "che-public", "che.keycloak.auth_server_url": "http://172.19.20.9:5050/auth", "che.keycloak.password.endpoint": "http://172.19.20.9:5050/auth/realms/che/account/password", "che.keycloak.logout.endpoint": "http://172.19.20.9:5050/auth/realms/che/protocol/openid-connect/logout", "che.keycloak.realm": "che" } The service allows downloading the JavaScript client library to interact with the provider using the https://codeready-<openshift_deployment_name>.<domain_name>/api/keycloak/OIDCKeycloak.js URL. Redirect the user to the appropriate provider's login page with all the necessary parameters, including client_id and the return redirection path. This can be done with any client library (JS or Java). When the user has logged in to the provider, the client-side code has obtained the JWT token, and the token has been validated, the creation of the subject begins. 
The verification of the token signature occurs in two main steps: Authentication: The token is extracted from the Authorization header or from the token query parameter and is parsed using the public key retrieved from the provider. In case of expired, invalid, or malformed tokens, a 403 error is sent to the user. The minimal use of the query parameter is recommended, due to its support limitations or complete removal in upcoming versions. If the validation is successful, the parsed form of the token is passed to the environment initialization step: Environment initialization: The filter extracts data from the JWT token claims, creates the user in the local database if it is not yet present, and constructs the subject object and sets it into the per-request EnvironmentContext object, which is statically accessible everywhere. If the request was made using only a machine token, the following single authentication filter is used: org.eclipse.che.multiuser.machine.authentication.server.MachineLoginFilter : The filter finds the user that the userId token belongs to, retrieves the user instance, and sets the principal to the session. The CodeReady Workspaces server-to-server requests are performed using a dedicated request factory that signs every request with the current subject token obtained from the EnvironmentContext object. Note Providing user-specific data Since RH-SSO may store user-specific information (first and last name, phone number, job title), there is a special implementation of the ProfileDao that can provide this data to consumers. The implementation is read-only, so users cannot perform create and update operations. 6.1.1.1.1. Obtaining the token from credentials through RH-SSO Clients that cannot run JavaScript or other clients (such as command-line clients or Selenium tests) must request the authorization token directly from RH-SSO. To obtain the token, send a request to the token endpoint with the username and password credentials. This request can be schematically described as the following cURL request: The CodeReady Workspaces dashboard uses a customized RH-SSO login page and an authentication mechanism based on grant_type=authorization_code . It is a two-step authentication process: Logging in and obtaining the authorization code. Obtaining the token using this authorization code. 6.1.1.1.2. Obtaining the token from the OpenShift token through RH-SSO When CodeReady Workspaces is installed on OpenShift using the Operator, and the OpenShift OAuth integration is enabled, as it is by default, the user's CodeReady Workspaces authentication token can be retrieved from the user's OpenShift token. To retrieve the authentication token from the OpenShift token, send a schematically described cURL request to the OpenShift token endpoint: The default values for <openshift_identity_provider_name> are: On OpenShift 3.11: openshift-v3 On OpenShift 4.x: openshift-v4 <user_openshift_token> is the token retrieved by the end-user with the command: Warning Before using this token exchange feature, it is required for an end user to be interactively logged in at least once to the CodeReady Workspaces Dashboard using the OpenShift login page. This step is needed to link the OpenShift and RH-SSO user accounts properly and set the required user profile information. 6.1.1.2. Authenticating to the CodeReady Workspaces server using other authentication implementations This procedure describes how to use an OpenID Connect (OIDC) authentication implementation other than RH-SSO. 
Procedure Update the authentication configuration parameters that are stored in the multiuser.properties file (such as client ID, authentication URL, realm name). Write a single filter or a chain of filters to validate tokens, create the user in the CodeReady Workspaces dashboard, and compose the subject object. If the new authorization provider supports the OpenID protocol, use the OIDC JS client library available at the settings endpoint because it is decoupled from specific implementations. If the selected provider stores additional data about the user (first and last name, job title), it is recommended to write a provider-specific ProfileDao implementation that provides this information. 6.1.1.3. Authenticating to the CodeReady Workspaces server using OAuth For easy user interaction with third-party services, the CodeReady Workspaces server supports OAuth authentication. OAuth tokens are also used for GitHub-related plug-ins. OAuth authentication has two main flows based on the RH-SSO brokering mechanism. The following are the two main OAuth API implementations: internal org.eclipse.che.security.oauth.EmbeddedOAuthAPI external org.eclipse.che.multiuser.keycloak.server.oauth2.DelegatedOAuthAPI To switch between the two implementations, use the che.oauth.service_mode= <embedded|delegated> configuration property. The main REST endpoint in the OAuth API is org.eclipse.che.security.oauth.OAuthAuthenticationService , which contains: An authentication method that the OAuth authentication flow can start with. A callback method to process callbacks from the provider. A token to retrieve the current user's OAuth token. These methods apply to the currently activated, embedded or delegated, OAuthAPI. The OAuthAPI then provides the following underlying operations: Finding the appropriate authenticator. Initializing the login process. Forwarding the user. 6.1.1.4. Using Swagger or REST clients to execute queries The user's RH-SSO token is used to execute queries to the secured API on the user's behalf through REST clients. A valid token must be attached as the Request header or the ?token=USDtoken query parameter. Access the CodeReady Workspaces Swagger interface at https://codeready-<openshift_deployment_name>.<domain_name>/swagger . The user must be signed in through RH-SSO, so that the access token is included in the Request header. 6.1.2. Authenticating in a CodeReady Workspaces workspace Workspace containers may contain services that must be protected with authentication. Such protected services are called secure . To secure these services, use a machine authentication mechanism. Machine tokens avoid the need to pass RH-SSO tokens to workspace containers (which can be insecure). Also, RH-SSO tokens may have a relatively shorter lifetime and require periodic renewals or refreshes, which is difficult to manage and keep in sync with the same user session tokens on clients. Figure 6.1. Authentication inside a workspace 6.1.2.1. Creating secure servers To create secure servers in CodeReady Workspaces workspaces, set the secure attribute of the endpoint to true in the dockerimage type component in the devfile. Devfile snippet for a secure server components: - type: dockerimage endpoints: - attributes: secure: 'true' 6.1.2.2. 
Workspace JWT token Workspace tokens are JSON web tokens ( JWT ) that contain the following information in their claims: uid : The ID of the user who owns this token uname : The name of the user who owns this token wsid : The ID of a workspace which can be queried with this token Every user is provided with a unique personal token for each workspace. The structure of a token and the signature are different from those in RH-SSO. The following is an example token view: # Header { "alg": "RS512", "kind": "machine_token" } # Payload { "wsid": "workspacekrh99xjenek3h571", "uid": "b07e3a58-ed50-4a6e-be17-fcf49ff8b242", "uname": "john", "jti": "06c73349-2242-45f8-a94c-722e081bb6fd" } # Signature { "value": "RSASHA256(base64UrlEncode(header) + . + base64UrlEncode(payload))" } The SHA-256 cipher with the RSA algorithm is used for signing machine tokens. It is not configurable. Also, there is no public service that distributes the public part of the key pair with which the token is signed. 6.1.2.3. Machine token validation The validation of machine tokens is performed using a dedicated per-workspace service with JWTProxy running on it in a separate Pod. When the workspace starts, this service receives the public part of the SHA key from the CodeReady Workspaces server. A separate verification endpoint is created for each secure server. When traffic comes to that endpoint, JWTProxy tries to extract the token from the cookies or headers and validates it using the public-key part. To query the CodeReady Workspaces server, a workspace server can use the machine token provided in the CHE_MACHINE_TOKEN environment variable. This token belongs to the user who starts the workspace. The scope of such requests is restricted to the current workspace only. The list of allowed operations is also strictly limited. 6.2. Authorizing users User authorization in CodeReady Workspaces is based on the permissions model. Permissions are used to control the allowed actions of users and establish a security model. Every request is verified for the presence of the required permission in the current user subject after it passes authentication. You can control resources managed by CodeReady Workspaces and allow certain actions by assigning permissions to users. Permissions can be applied to the following entities: Workspace Organization System All permissions can be managed using the provided REST API. The APIs are documented using Swagger at https://codeready-<openshift_deployment_name>.<domain_name>/swagger/#!/permissions . 6.2.1. CodeReady Workspaces workspace permissions The user who creates a workspace is the workspace owner. By default, the workspace owner has the following permissions: read , use , run , configure , setPermissions , and delete . Workspace owners can invite users into the workspace and control workspace permissions for other users. The following permissions are associated with workspaces: Table 6.1. CodeReady Workspaces workspace permissions Permission Description read Allows reading the workspace configuration. use Allows using a workspace and interacting with it. run Allows starting and stopping a workspace. configure Allows defining and changing the workspace configuration. setPermissions Allows updating the workspace permissions for other users. delete Allows deleting the workspace. 6.2.2. CodeReady Workspaces organization permissions A CodeReady Workspaces organization is a named set of users. The following permissions are applicable to organizations: Table 6.2. 
CodeReady Workspaces organization permissions Permission Description update Allows editing of the organization settings and information. delete Allows deleting an organization. manageSuborganizations Allows creating and managing sub-organizations. manageResources Allows redistribution of an organization's resources and defining the resource limits. manageWorkspaces Allows creating and managing all the organization's workspaces. setPermissions Allows adding and removing users and updating their permissions. 6.2.3. CodeReady Workspaces system permissions CodeReady Workspaces system permissions control aspects of the whole CodeReady Workspaces installation. The following permissions are applicable to the system: Table 6.3. CodeReady Workspaces system permission Permission Description manageSystem Allows control of the system, workspaces, and organizations. setPermissions Allows updating the permissions for users on the system. manageUsers Allows creating and managing users. monitorSystem Allows accessing endpoints used for monitoring the state of the server. All system permissions are granted to the administrative user who is configured in the CHE_SYSTEM_ADMIN__NAME property (the default is admin ). The system permissions are granted when the CodeReady Workspaces server starts. If the user is not present in the CodeReady Workspaces user database, it happens after the first user's login. 6.2.4. manageSystem permission Users with the manageSystem permission have access to the following services: Path HTTP Method Description /resource/free/ GET Get free resource limits. /resource/free/{accountId} GET Get free resource limits for the given account. /resource/free/{accountId} POST Edit free resource limit for the given account. /resource/free/{accountId} DELETE Remove free resource limit for the given account. /installer/ POST Add installer to the registry. /installer/{key} PUT Update installer in the registry. /installer/{key} DELETE Remove installer from the registry. /logger/ GET Get logging configurations in the CodeReady Workspaces server. /logger/{name} GET Get configurations of logger by its name in the CodeReady Workspaces server. /logger/{name} PUT Create logger in the CodeReady Workspaces server. /logger/{name} POST Edit logger in the CodeReady Workspaces server. /resource/{accountId}/details GET Get detailed information about resources for the given account. /system/stop POST Shutdown all system services, prepare CodeReady Workspaces to stop. 6.2.5. monitorSystem permission Users with the monitorSystem permission have access to the following services. Path HTTP Method Description /activity GET Get workspaces in a certain state for a certain amount of time. 6.2.6. Listing CodeReady Workspaces permissions To list CodeReady Workspaces permissions that apply to a specific resource , perform the GET /permissions request. To list the permissions that apply to a user , perform the GET /permissions/{domain} request. To list the permissions that apply to all users , perform the GET /permissions/{domain}/all request. The user must have manageSystem permissions to see this information. The suitable domain values are: system organization workspace Note The domain is optional. If no domain is specified, the API returns all possible permissions for all the domains. 6.2.7. Assigning CodeReady Workspaces permissions To assign permissions to a resource, perform the POST /permissions request. 
The suitable domain values are: system organization workspace The following is a message body that requests permissions for a user with a userId to a workspace with a workspaceID : Requesting CodeReady Workspaces user permissions { "actions": [ "read", "use", "run", "configure", "setPermissions" ], "userId": "userID", 1 "domainId": "workspace", "instanceId": "workspaceID" 2 } 1 The userId parameter is the ID of the user that has been granted certain permissions. 2 The instanceId parameter is the ID of the resource that retrieves the permission for all users. 6.2.8. Sharing CodeReady Workspaces permissions A user with setPermissions privileges can share a workspace and grant read , use , run , configure , or setPermissions privileges for other users. Procedure To share workspace permissions: Select a workspace in the user dashboard. Navigate to the Share tab and enter the email IDs of the users. Use commas or spaces as separators for multiple emails.
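As an illustration only, the following is a minimal curl sketch of the POST /permissions request described above; the /api/permissions path, the Bearer token variable, and the user and workspace IDs are placeholder assumptions and must be adapted to your deployment.

    curl -X POST \
      -H "Authorization: Bearer ${USER_TOKEN}" \
      -H "Content-Type: application/json" \
      -d '{"actions": ["read", "use", "run", "configure", "setPermissions"], "userId": "<userID>", "domainId": "workspace", "instanceId": "<workspaceID>"}' \
      https://codeready-<openshift_deployment_name>.<domain_name>/api/permissions

A GET request against the same path, for example GET /permissions/workspace, returns the permissions of the calling user for that domain.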
[ "{ \"che.keycloak.token.endpoint\": \"http://172.19.20.9:5050/auth/realms/che/protocol/openid-connect/token\", \"che.keycloak.profile.endpoint\": \"http://172.19.20.9:5050/auth/realms/che/account\", \"che.keycloak.client_id\": \"che-public\", \"che.keycloak.auth_server_url\": \"http://172.19.20.9:5050/auth\", \"che.keycloak.password.endpoint\": \"http://172.19.20.9:5050/auth/realms/che/account/password\", \"che.keycloak.logout.endpoint\": \"http://172.19.20.9:5050/auth/realms/che/protocol/openid-connect/logout\", \"che.keycloak.realm\": \"che\" }", "curl --data \"grant_type=password&client_id= <client_name> &username= <username> &password= <password> \" http://<keyckloak_host>:5050/auth/realms/ <realm_name> /protocol/openid-connect/token", "curl -X POST -d \"client_id= <client_name> \" --data-urlencode \"grant_type=urn:ietf:params:oauth:grant-type:token-exchange\" -d \"subject_token= <user_openshift_token> \" -d \"subject_issuer= <openshift_identity_provider_name> \" --data-urlencode \"subject_token_type=urn:ietf:params:oauth:token-type:access_token\" http://<keyckloak_host>:5050/auth/realms/ <realm_name> /protocol/openid-connect/token", "oc whoami --show-token", "components: - type: dockerimage endpoints: - attributes: secure: 'true'", "Header { \"alg\": \"RS512\", \"kind\": \"machine_token\" } Payload { \"wsid\": \"workspacekrh99xjenek3h571\", \"uid\": \"b07e3a58-ed50-4a6e-be17-fcf49ff8b242\", \"uname\": \"john\", \"jti\": \"06c73349-2242-45f8-a94c-722e081bb6fd\" } Signature { \"value\": \"RSASHA256(base64UrlEncode(header) + . + base64UrlEncode(payload))\" }", "{ \"actions\": [ \"read\", \"use\", \"run\", \"configure\", \"setPermissions\" ], \"userId\": \"userID\", 1 \"domainId\": \"workspace\", \"instanceId\": \"workspaceID\" 2 }" ]
https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.1/html/administration_guide/securing-codeready-workspaces_crw
2.4. Restricting Network Connectivity During the Installation Process
2.4. Restricting Network Connectivity During the Installation Process When installing Red Hat Enterprise Linux, the installation medium represents a snapshot of the system at a particular time. Because of this, it may not be up-to-date with the latest security fixes and may be vulnerable to issues that were fixed only after the installation medium was released. When installing a potentially vulnerable operating system, always limit exposure to the closest necessary network zone. The safest choice is the "no network" zone, which means leaving your machine disconnected during the installation process. In some cases, a LAN or intranet connection is sufficient, while a direct Internet connection is the riskiest. To follow best security practices, choose the zone closest to your repository when installing Red Hat Enterprise Linux from a network. For more information about configuring network connectivity, see the Network & Hostname chapter of the Red Hat Enterprise Linux 7 Installation Guide.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-restricting_network_connectivity_during_the_installation_process
Chapter 6. Installation configuration parameters for Nutanix
Chapter 6. Installation configuration parameters for Nutanix Before you deploy an OpenShift Container Platform cluster on Nutanix, you provide parameters to customize your cluster and the platform that hosts it. When you create the install-config.yaml file, you provide values for the required parameters through the command line. You can then modify the install-config.yaml file to customize your cluster further. 6.1. Available installation configuration parameters for Nutanix The following tables specify the required, optional, and Nutanix-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 6.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 6.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters and hyphens ( - ), such as dev . The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 6.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 6.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . 
If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 6.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 6.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. 
By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Mint , Passthrough , Manual or an empty string ( "" ). [1] Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. 
false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. 6.1.4. Additional Nutanix configuration parameters Additional Nutanix configuration parameters are described in the following table: Table 6.4. Additional Nutanix cluster parameters Parameter Description Values The name of a prism category key to apply to compute VMs. This parameter must be accompanied by the value parameter, and both key and value parameters must exist in Prism Central. For more information on categories, see Category management . String The value of a prism category key-value pair to apply to compute VMs. This parameter must be accompanied by the key parameter, and both key and value parameters must exist in Prism Central. String The type of identifier you use to select a project for compute VMs. Projects define logical groups of user roles for managing permissions, networks, and other parameters. For more information on projects, see Projects Overview . name or uuid The name or UUID of a project with which compute VMs are associated. This parameter must be accompanied by the type parameter. String The boot type that the compute machines use. You must use the Legacy boot type in OpenShift Container Platform 4.14. For more information on boot types, see Understanding UEFI, Secure Boot, and TPM in the Virtualized Environment . Legacy , SecureBoot or UEFI . The default is Legacy . The name of a prism category key to apply to control plane VMs. This parameter must be accompanied by the value parameter, and both key and value parameters must exist in Prism Central. For more information on categories, see Category management . String The value of a prism category key-value pair to apply to control plane VMs. This parameter must be accompanied by the key parameter, and both key and value parameters must exist in Prism Central. String The type of identifier you use to select a project for control plane VMs. Projects define logical groups of user roles for managing permissions, networks, and other parameters. For more information on projects, see Projects Overview . name or uuid The name or UUID of a project with which control plane VMs are associated. This parameter must be accompanied by the type parameter. String The name of a prism category key to apply to all VMs. 
This parameter must be accompanied by the value parameter, and both key and value parameters must exist in Prism Central. For more information on categories, see Category management . String The value of a prism category key-value pair to apply to all VMs. This parameter must be accompanied by the key parameter, and both key and value parameters must exist in Prism Central. String The type of identifier you use to select a project for all VMs. Projects define logical groups of user roles for managing permissions, networks, and other parameters. For more information on projects, see Projects Overview . name or uuid . The name or UUID of a project with which all VMs are associated. This parameter must be accompanied by the type parameter. String The boot type for all machines. You must use the Legacy boot type in OpenShift Container Platform 4.14. For more information on boot types, see Understanding UEFI, Secure Boot, and TPM in the Virtualized Environment . Legacy , SecureBoot or UEFI . The default is Legacy . The virtual IP (VIP) address that you configured for control plane API access. IP address The virtual IP (VIP) address that you configured for cluster ingress. IP address The Prism Central domain name or IP address. String The port that is used to log into Prism Central. String The password for the Prism Central user name. String The user name that is used to log into Prism Central. String The Prism Element domain name or IP address. [ 1 ] String The port that is used to log into Prism Element. String The universally unique identifier (UUID) for Prism Element. String The UUID of the Prism Element network that contains the virtual IP addresses and DNS records that you configured. [ 2 ] String Optional: By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image. If Prism Central does not have internet access, you can override the default behavior by hosting the RHCOS image on any HTTP server and pointing the installation program to the image. An HTTP or HTTPS URL, optionally with a SHA-256 checksum. For example, http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 The prismElements section holds a list of Prism Elements (clusters). A Prism Element encompasses all of the Nutanix resources, for example virtual machines and subnets, that are used to host the OpenShift Container Platform cluster. Only a single Prism Element is supported. Only one subnet per OpenShift Container Platform cluster is supported.
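For orientation, the following is a minimal, hypothetical install-config.yaml sketch that combines the required, network, and Nutanix-specific parameters described above; every address, UUID, credential, port, and secret shown is a placeholder assumption that must be replaced with values from your own environment.

    apiVersion: v1
    baseDomain: example.com
    metadata:
      name: my-cluster
    controlPlane:
      name: master
      replicas: 3
    compute:
    - name: worker
      replicas: 3
    networking:
      networkType: OVNKubernetes
      clusterNetwork:
      - cidr: 10.128.0.0/14
        hostPrefix: 23
      machineNetwork:
      - cidr: 10.0.0.0/16
      serviceNetwork:
      - 172.30.0.0/16
    platform:
      nutanix:
        apiVIP: 10.40.142.7
        ingressVIP: 10.40.142.8
        prismCentral:
          endpoint:
            address: prism-central.example.com
            port: 9440
          username: <username>
          password: <password>
        prismElements:
        - endpoint:
            address: prism-element.example.com
            port: 9440
          uuid: <prism_element_uuid>
        subnetUUIDs:
        - <subnet_uuid>
    pullSecret: '<pull_secret>'
    sshKey: ssh-ed25519 AAAA...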
[ "apiVersion:", "baseDomain:", "metadata:", "metadata: name:", "platform:", "pullSecret:", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking:", "networking: networkType:", "networking: clusterNetwork:", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: clusterNetwork: cidr:", "networking: clusterNetwork: hostPrefix:", "networking: serviceNetwork:", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork:", "networking: machineNetwork: - cidr: 10.0.0.0/16", "networking: machineNetwork: cidr:", "additionalTrustBundle:", "capabilities:", "capabilities: baselineCapabilitySet:", "capabilities: additionalEnabledCapabilities:", "cpuPartitioningMode:", "compute:", "compute: architecture:", "compute: hyperthreading:", "compute: name:", "compute: platform:", "compute: replicas:", "featureSet:", "controlPlane:", "controlPlane: architecture:", "controlPlane: hyperthreading:", "controlPlane: name:", "controlPlane: platform:", "controlPlane: replicas:", "credentialsMode:", "fips:", "imageContentSources:", "imageContentSources: source:", "imageContentSources: mirrors:", "publish:", "sshKey:", "compute: platform: nutanix: categories: key:", "compute: platform: nutanix: categories: value:", "compute: platform: nutanix: project: type:", "compute: platform: nutanix: project: name: or uuid:", "compute: platform: nutanix: bootType:", "controlPlane: platform: nutanix: categories: key:", "controlPlane: platform: nutanix: categories: value:", "controlPlane: platform: nutanix: project: type:", "controlPlane: platform: nutanix: project: name: or uuid:", "platform: nutanix: defaultMachinePlatform: categories: key:", "platform: nutanix: defaultMachinePlatform: categories: value:", "platform: nutanix: defaultMachinePlatform: project: type:", "platform: nutanix: defaultMachinePlatform: project: name: or uuid:", "platform: nutanix: defaultMachinePlatform: bootType:", "platform: nutanix: apiVIP:", "platform: nutanix: ingressVIP:", "platform: nutanix: prismCentral: endpoint: address:", "platform: nutanix: prismCentral: endpoint: port:", "platform: nutanix: prismCentral: password:", "platform: nutanix: prismCentral: username:", "platform: nutanix: prismElements: endpoint: address:", "platform: nutanix: prismElements: endpoint: port:", "platform: nutanix: prismElements: uuid:", "platform: nutanix: subnetUUIDs:", "platform: nutanix: clusterOSImage:" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_nutanix/installation-config-parameters-nutanix
Chapter 63. Atomic Host and Containers
Chapter 63. Atomic Host and Containers SELinux prevents Docker from running a container Due to a missing label for the /usr/bin/docker-current binary file, Docker is prevented from running a container by SELinux. (BZ# 1358819 )
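As a hedged diagnostic sketch only (not a documented workaround), you can inspect the SELinux label currently assigned to the binary and look for related access denials in the audit log; the grep filter is an assumption for this example.

    ls -Z /usr/bin/docker-current
    ausearch -m avc -ts recent | grep docker-current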
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.3_release_notes/known_issues_atomic_host_and_containers
Chapter 110. AclRuleClusterResource schema reference
Chapter 110. AclRuleClusterResource schema reference Used in: AclRule The type property is a discriminator that distinguishes use of the AclRuleClusterResource type from AclRuleTopicResource , AclRuleGroupResource , and AclRuleTransactionalIdResource . It must have the value cluster for the type AclRuleClusterResource . Property Description type Must be cluster . string
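For illustration, the following hypothetical KafkaUser fragment shows where an AclRuleClusterResource appears; the user name, cluster label, authentication type, and chosen operations are assumptions for this sketch.

    apiVersion: kafka.strimzi.io/v1beta2
    kind: KafkaUser
    metadata:
      name: my-user
      labels:
        strimzi.io/cluster: my-cluster
    spec:
      authentication:
        type: tls
      authorization:
        type: simple
        acls:
          - resource:
              type: cluster    # AclRuleClusterResource: type must be "cluster"
            operations:
              - Describe
              - Alter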
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-AclRuleClusterResource-reference
1.3. Management CLI
1.3. Management CLI 1.3.1. Launch the Management CLI Procedure 1.1. Launch CLI in Linux or Microsoft Windows Server Launch the CLI in Linux Run the EAP_HOME /bin/jboss-cli.sh file by entering the following at a command line: Launch the CLI in Microsoft Windows Server Run the EAP_HOME \bin\jboss-cli.bat file by double-clicking it, or by entering the following at a command line: 1.3.2. Quit the Management CLI From the Management CLI, enter the quit command: 1.3.3. Connect to a Managed Server Instance Using the Management CLI Prerequisites Section 1.3.1, "Launch the Management CLI" Procedure 1.2. Connect to a Managed Server Instance Run the connect command From the Management CLI, enter the connect command: Alternatively, to connect to a managed server when starting the Management CLI on a Linux system, use the --connect parameter: The --connect parameter can be used to specify the host and port of the server. To connect to the address 192.168.0.1 with the port value 9999, the following would apply: 1.3.4. Obtain Help with the Management CLI Summary Sometimes you might need guidance when learning a CLI command or be unsure about what to do. The Management CLI features a help dialog with general and context-sensitive options. (Note that the help commands that depend on the operation context require an established connection to either a standalone or domain controller. These commands will not appear in the listing unless the connection has been established.) Prerequisites Section 1.3.1, "Launch the Management CLI" For general help From the Management CLI, enter the help command: Obtain context-sensitive help From the Management CLI, enter the help --commands extended command: For a more detailed description of a specific command, enter the command, followed by --help . Result The CLI help information is displayed.
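As a further hedged example, the CLI can also execute a single operation non-interactively and exit; the attribute read here is illustrative.

    EAP_HOME/bin/jboss-cli.sh --connect --command=":read-attribute(name=server-state)"

The same operation can also be entered at the interactive prompt after running connect.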
[ "EAP_HOME /bin/jboss-cli.sh", "C:\\> EAP_HOME \\bin\\jboss-cli.bat", "[domain@localhost:9999 /] quit", "[disconnected /] connect Connected to domain controller at localhost:9999", "EAP_HOME /bin/jboss-cli.sh --connect", "EAP_HOME /bin/jboss-cli.sh --connect --controller=192.168.0.1:9999", "[standalone@localhost:9999 /] help", "[standalone@localhost:9999 /] help --commands", "[standalone@localhost:9999 /] deploy --help" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/sect-Management_CLI
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/integrate_openstack_identity_with_external_user_management_services/making-open-source-more-inclusive
Chapter 2. Understand MicroProfile
Chapter 2. Understand MicroProfile 2.1. MicroProfile Config 2.1.1. MicroProfile Config in JBoss EAP Configuration data can change dynamically and applications need to be able to access the latest configuration information without restarting the server. MicroProfile Config provides portable externalization of configuration data. This means, you can configure applications and microservices to run in multiple environments without modification or repackaging. MicroProfile Config functionality is implemented in JBoss EAP using the SmallRye Config component and is provided by the microprofile-config-smallrye subsystem. Note MicroProfile Config is only supported in JBoss EAP XP. It is not supported in JBoss EAP. Important If you are adding your own Config implementations, you need to use the methods in the latest version of the Config interface. Additional Resources MicroProfile Config SmallRye Config Config implementations 2.1.2. MicroProfile Config sources supported in MicroProfile Config MicroProfile Config configuration properties can come from different locations and can be in different formats. These properties are provided by ConfigSources. ConfigSources are implementations of the org.eclipse.microprofile.config.spi.ConfigSource interface. The MicroProfile Config specification provides the following default ConfigSource implementations for retrieving configuration values: System.getProperties() . System.getenv() . All META-INF/microprofile-config.properties files on the class path. The microprofile-config-smallrye subsystem supports additional types of ConfigSource resources for retrieving configuration values. You can also retrieve the configuration values from the following resources: Properties in a microprofile-config-smallrye/config-source management resource Files in a directory ConfigSource class ConfigSourceProvider class Additional Resources org.eclipse.microprofile.config.spi.ConfigSource 2.2. MicroProfile Fault Tolerance 2.2.1. About MicroProfile Fault Tolerance specification The MicroProfile Fault Tolerance specification defines strategies to deal with errors inherent in distributed microservices. The MicroProfile Fault Tolerance specification defines the following strategies to handle errors: Timeout Define the amount of time within which an execution must finish. Defining a timeout prevents waiting for an execution indefinitely. Retry Define the criteria for retrying a failed execution. Fallback Provide an alternative in the case of a failed execution. CircuitBreaker Define the number of failed execution attempts before temporarily stopping. You can define the length of the delay before resuming execution. Bulkhead Isolate failures in part of the system so that the rest of the system can still function. Asynchronous Execute client request in a separate thread. Additional Resources MicroProfile Fault Tolerance specification 2.2.2. MicroProfile Fault Tolerance in JBoss EAP The microprofile-fault-tolerance-smallrye subsystem provides support for MicroProfile Fault Tolerance in JBoss EAP. The subsystem is available only in the JBoss EAP XP stream. The microprofile-fault-tolerance-smallrye subsystem provides the following annotations for interceptor bindings: @Timeout @Retry @Fallback @CircuitBreaker @Bulkhead @Asynchronous You can bind these annotations at the class level or at the method level. An annotation bound to a class applies to all of the business methods of that class. 
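For illustration, the following is a minimal, hypothetical CDI bean that binds fault tolerance annotations at the method level; the class name, method names, and timing values are assumptions, and the binding rules that follow apply to classes and methods written this way.

    import java.time.temporal.ChronoUnit;
    import javax.enterprise.context.ApplicationScoped;
    import org.eclipse.microprofile.faulttolerance.Fallback;
    import org.eclipse.microprofile.faulttolerance.Retry;
    import org.eclipse.microprofile.faulttolerance.Timeout;

    @ApplicationScoped
    public class InventoryService {

        // Retry up to 3 times, limit each attempt to 500 ms, and fall back on failure.
        @Retry(maxRetries = 3)
        @Timeout(value = 500, unit = ChronoUnit.MILLIS)
        @Fallback(fallbackMethod = "fallbackCount")
        public long countItems() {
            return callRemoteInventory();
        }

        // The fallback method must match the signature of the guarded method.
        public long fallbackCount() {
            return 0L;
        }

        private long callRemoteInventory() {
            // Placeholder for a call that may fail or time out.
            throw new RuntimeException("remote inventory service unavailable");
        }
    }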
The following rules apply to binding interceptors: If a component class declares or inherits a class-level interceptor binding, the following restrictions apply: The class must not be declared final. The class must not contain any static, private, or final methods. If a non-static, non-private method of a component class declares a method-level interceptor binding, neither the method nor the component class may be declared final. Fault tolerance operations have the following restrictions: Fault tolerance interceptor bindings must be applied to a bean class or bean class method. When invoked, the invocation must be the business method invocation as defined in the Jakarta Contexts and Dependency Injection specification. An operation is not considered fault tolerant if both of the following conditions are true: The method itself is not bound to any fault tolerance interceptor. The class containing the method is not bound to any fault tolerance interceptor. The microprofile-fault-tolerance-smallrye subsystem provides the following configuration options, in addition to the configuration options provided by MicroProfile Fault Tolerance: io.smallrye.faulttolerance.globalThreadPoolSize io.smallrye.faulttolerance.timeoutExecutorThreads Additional Resources MicroProfile Fault Tolerance Specification SmallRye Fault Tolerance project 2.3. MicroProfile Health 2.3.1. MicroProfile Health in JBoss EAP JBoss EAP includes the SmallRye Health component, which you can use to determine whether the JBoss EAP instance is responding as expected. This capability is enabled by default. MicroProfile Health is only available when running JBoss EAP as a standalone server. The MicroProfile Health specification defines the following health checks: Readiness Determines whether an application is ready to process requests. The annotation @Readiness provides this health check. Liveness Determines whether an application is running. The annotation @Liveness provides this health check. The @Health annotation was removed in MicroProfile Health 3.0. MicroProfile Health 3.0 has the following breaking changes: Pruning of @Health qualifier Renaming of the HealthCheckResponse state parameter to status to fix deserialization issues. This has also resulted in renaming of the corresponding methods. For more information about the breaking changes in MicroProfile Health 3.0, see Release Notes for MicroProfile Health 3.0 . Important The :empty-readiness-checks-status and the :empty-liveness-checks-status management attributes specify the global status when no readiness or liveness probes are defined. Additional Resources Global status when probes are not defined SmallRye Health MicroProfile Health Custom health check example 2.4. MicroProfile JWT 2.4.1. MicroProfile JWT integration in JBoss EAP The subsystem microprofile-jwt-smallrye provides MicroProfile JWT integration in JBoss EAP. The following functionalities are provided by the microprofile-jwt-smallrye subsystem: Detecting deployments that use MicroProfile JWT security. Activating support for MicroProfile JWT. The subsystem contains no configurable attributes or resources. In addition to the microprofile-jwt-smallrye subsystem, the org.eclipse.microprofile.jwt.auth.api module provides MicroProfile JWT integration in JBoss EAP. Additional Resources SmallRye JWT 2.4.2. Differences between a traditional deployment and a MicroProfile JWT deployment MicroProfile JWT deployments do not depend on managed SecurityDomain resources like traditional JBoss EAP deployments.
Instead, a virtual SecurityDomain is created and used across the MicroProfile JWT deployment. As the MicroProfile JWT deployment is configured entirely within the MicroProfile Config properties and the microprofile-jwt-smallrye subsystem, the virtual SecurityDomain does not need any other managed configuration for the deployment. 2.4.3. MicroProfile JWT activation in JBoss EAP MicroProfile JWT is activated for applications based on the presence of an auth-method in the application. The MicroProfile JWT integration is activated for an application in the following way: As part of the deployment process, JBoss EAP scans the application archive for the presence of an auth-method . If an auth-method is present and defined as MP-JWT , the MicroProfile JWT integration is activated. The auth-method can be specified in either or both of the following files: the file containing the class that extends javax.ws.rs.core.Application , annotated with the @LoginConfig the web.xml configuration file If auth-method is defined both in a class, using annotation, and in the web.xml configuration file, the definition in web.xml configuration file is used. 2.4.4. Limitations of MicroProfile JWT in JBoss EAP The MicroProfile JWT implementation in JBoss EAP has certain limitations. The following limitations of MicroProfile JWT implementation exist in JBoss EAP: The MicroProfile JWT implementation parses only the first key from the JSON Web Key Set (JWKS) supplied in the mp.jwt.verify.publickey property. Therefore, if a token claims to be signed by the second key or any key after the second key, the token fails verification and the request containing the token is not authorized. Base64 encoding of JWKS is not supported. In both cases, a clear text JWKS can be referenced instead of using the mp.jwt.verify.publickey.location config property. 2.5. MicroProfile Metrics 2.5.1. MicroProfile Metrics in JBoss EAP JBoss EAP includes the SmallRye Metrics component. The SmallRye Metrics component provides the MicroProfile Metrics functionality using the microprofile-metrics-smallrye subsystem. The microprofile-metrics-smallrye subsystem provides monitoring data for the JBoss EAP instance. The subsystem is enabled by default. Important The microprofile-metrics-smallrye subsystem is only enabled in standalone configurations. Additional Resources SmallRye Metrics MicroProfile Metrics 2.6. MicroProfile OpenAPI 2.6.1. MicroProfile OpenAPI in JBoss EAP MicroProfile OpenAPI is integrated in JBoss EAP using the microprofile-openapi-smallrye subsystem. The MicroProfile OpenAPI specification defines an HTTP endpoint that serves an OpenAPI 3.0 document. The OpenAPI 3.0 document describes the REST services for the host. The OpenAPI endpoint is registered using the configured path, for example http://localhost:8080/openapi , local to the root of the host associated with a deployment. Note Currently, the OpenAPI endpoint for a virtual host can only document a single deployment. To use OpenAPI with multiple deployments registered with different context paths on the same virtual host, each deployment must use a distinct endpoint path. The OpenAPI endpoint returns a YAML document by default. You can also request a JSON document using an Accept HTTP header, or a format query parameter. If the Undertow server or host of a given application defines an HTTPS listener then the OpenAPI document is also available using HTTPS. For example, an endpoint for HTTPS is https://localhost:8443/openapi . 2.7. MicroProfile OpenTracing 2.7.1. 
MicroProfile OpenTracing The ability to trace requests across service boundaries is important, especially in a microservices environment where a request can flow through multiple services during its life cycle. The MicroProfile OpenTracing specification defines behaviors and an API for accessing an OpenTracing compliant Tracer interface within a CDI-bean application. The Tracer interface automatically traces JAX-RS applications. The behaviors specify how OpenTracing Spans are created automatically for incoming and outgoing requests. The API defines how to explicitly disable or enable tracing for given endpoints. Additional Resources For more information about the MicroProfile OpenTracing specification, see the MicroProfile OpenTracing documentation. For more information about the Tracer interface, see Tracer javadoc . 2.7.2. MicroProfile OpenTracing in JBoss EAP You can use the microprofile-opentracing-smallrye subsystem to configure the distributed tracing of Jakarta EE applications. This subsystem uses the SmallRye OpenTracing component to provide the MicroProfile OpenTracing functionality for JBoss EAP. MicroProfile OpenTracing 2.0 supports tracing requests for applications. You can configure the default Jaeger Java Client tracer, plus a set of instrumentation libraries for components commonly used in Jakarta EE, using the JBoss EAP management API with the management CLI or the management console. Note Each individual WAR deployed to the JBoss EAP server automatically has its own Tracer instance. Each WAR within an EAR is treated as an individual WAR, and each has its own Tracer instance. By default, the service name used with the Jaeger Client is derived from the deployment's name, which is usually the WAR file name. Within the microprofile-opentracing-smallrye subsystem, you can configure the Jaeger Java Client by setting system properties or environment variables. Important Configuring the Jaeger Client tracer using system properties and environment variables is provided as a Technology Preview. The system properties and environment variables associated with the Jaeger Client tracer might change and become incompatible with each other in future releases. Note By default, the probabilistic sampling strategy of the Jaeger Client for Java is set to 0.001 , meaning that only approximately one in one thousand traces are sampled. To sample every request, set the system properties JAEGER_SAMPLER_TYPE to const and JAEGER_SAMPLER_PARAM to 1 . Additional Resources For more information about SmallRye OpenTracing functionality, see the SmallRye OpenTracing component. For more information about the default tracer, see the Jaeger Java Client. For more information about the Tracer interface, see Tracer javadoc . For more information about overriding the default tracer and tracing Jakarta Contexts and Dependency Injection beans, see Using Eclipse MicroProfile OpenTracing to Trace Requests in the Development Guide . For more information about configuring the Jaeger Client, see the Jaeger documentation. For more information about valid system properties, see Configuration via Environment in the Jaeger documentation. 2.8. MicroProfile REST Client 2.8.1. MicroProfile REST client JBoss EAP XP 3.0.0 supports the MicroProfile REST client 2.0 that builds on Jakarta RESTful Web Services 2.1.6 client APIs to provide a type-safe approach to invoking RESTful services over HTTP. The MicroProfile Type Safe REST clients are defined as Java interfaces.
With the MicroProfile REST clients, you can write client applications with executable code. Use the MicroProfile REST client to take advantage of the following capabilities: An intuitive syntax Programmatic registration of providers Declarative registration of providers Declarative specification of headers Propagation of headers on the server ResponseExceptionMapper Jakarta Contexts and Dependency Injection integration Access to server-sent events (SSE) Additional resources A comparison between MicroProfile REST client and Jakarta RESTful Web Services syntaxes Programmatic registration of providers in MicroProfile REST client Declarative registration of providers in MicroProfile REST client Declarative specification of headers in MicroProfile REST client Propagation of headers on the server in MicroProfile REST client ResponseExceptionMapper in MicroProfile REST client Context dependency injection with MicroProfile REST client
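For illustration, a minimal, hypothetical type-safe client interface might look like the following; the base URI, resource path, and names are assumptions for this sketch.

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;
    import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;

    // The interface mirrors the remote JAX-RS resource; no HTTP plumbing is written by hand.
    @RegisterRestClient(baseUri = "http://localhost:8080/greeting-service")
    @Path("/greetings")
    public interface GreetingClient {

        @GET
        @Path("/{name}")
        @Produces(MediaType.TEXT_PLAIN)
        String greet(@PathParam("name") String name);
    }

Such an interface can then be injected into a CDI bean with @Inject @RestClient, or built programmatically with RestClientBuilder.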
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/using_jboss_eap_xp_3.0.0/understand_microprofile
Chapter 1. Streams for Apache Kafka Proxy overview
Chapter 1. Streams for Apache Kafka Proxy overview Streams for Apache Kafka Proxy is an Apache Kafka protocol-aware ("Layer 7") proxy designed to enhance Kafka-based systems. Through its filter mechanism it allows additional behavior to be introduced into a Kafka-based system without requiring changes to either your applications or the Kafka cluster itself. Built-in filters are provided as part of the solution. Functioning as an intermediary, the Streams for Apache Kafka Proxy mediates communication between a Kafka cluster and its clients. It takes on the responsibility of receiving, filtering, and forwarding messages. An API provides a convenient means for implementing custom logic within the proxy. Additional resources Apache Kafka website
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_the_streams_for_apache_kafka_proxy/assembly-overview-str
7.51. evolution-data-server
7.51. evolution-data-server 7.51.1. RHBA-2015:1264 - evolution-data-server bug fix update Updated evolution-data-server packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The evolution-data-server packages provide a unified back end for applications which interact with contacts, tasks and calendar information. Evolution Data Server was originally developed as a back end for the Evolution information management application, but is now used by various other applications. Bug Fixes BZ# 1163375 The Evolution client could not connect to a mail server using the Secure Sockets Layer (SSL) protocol when the server had SSL disabled due to the POODLE vulnerability. With this update, the Evolution Data Server has been modified to also connect using the Transport Layer Security (TLSv1) protocol, thus fixing this bug. BZ# 1141760 Previously, the e-calendar-factory process did not terminate automatically when the user logged out of the graphical desktop environment, and e-calendar-factory thus redundantly consumed system resources. This update fixes the underlying code, which prevents this problem from occurring. Users of evolution-data-server are advised to upgrade to these updated packages, which fix these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-evolution-data-server
Chapter 4. Introducing metrics
Chapter 4. Introducing metrics If you want to introduce metrics to your Streams for Apache Kafka Proxy deployment, you can configure an insecure HTTP and Prometheus endpoint (at /metrics ). Add the following to the ConfigMap resource that defines the Streams for Apache Kafka Proxy configuration: Minimal metrics configuration adminHttp: endpoints: prometheus: {} By default, the HTTP endpoint listens on 0.0.0.0:9190 . You can change the hostname and port as follows: Example metrics configuration with hostname and port adminHttp: host: localhost port: 9999 endpoints: prometheus: {} The example files provided with the proxy include a PodMonitor resource. If you have enabled monitoring in OpenShift for user-defined projects, you can use a PodMonitor resource to ingest the proxy metrics. Example PodMonitor resource configuration apiVersion: monitoring.coreos.com/v1 kind: PodMonitor metadata: name: proxy labels: app: proxy spec: selector: matchLabels: app: proxy namespaceSelector: matchNames: - proxy podMetricsEndpoints: - path: /metrics port: metrics
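Assuming the default listener address and port, you can check that the endpoint responds from a host with access to the proxy; the host name used here is a placeholder.

    curl http://<proxy-pod-or-host>:9190/metrics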
[ "adminHttp: endpoints: prometheus: {}", "adminHttp: host: localhost port: 9999 endpoints: prometheus: {}", "apiVersion: monitoring.coreos.com/v1 kind: PodMonitor metadata: name: proxy labels: app: proxy spec: selector: matchLabels: app: proxy namespaceSelector: matchNames: - proxy podMetricsEndpoints: - path: /metrics port: metrics" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_the_streams_for_apache_kafka_proxy/proc-introducing-metrics-str
Chapter 5. Quick Start Guide to RHEL System Roles for SAP
Chapter 5. Quick Start Guide to RHEL System Roles for SAP Use the following procedures for configuring or verifying one or more systems for the installation of SAP NetWeaver or SAP HANA 5.1. Preparing the control node Use the following steps to display the system messages in English. RHEL System Roles for SAP requires that the Ansible control node uses locale C or en_US.UTF-8 . Procedure Run the command on the local host to check the current setting. # locale The output should display either C or en_US.UTF-8 in the line starting with LC_MESSAGES= . If the command does not produce the expected output, run the following command on the local host before executing the ansible-playbook command: # export LC_ALL=C Or # export LC_ALL=en_US.UTF-8 Note These steps are necessary because, by default, the LC_* variables are forwarded to a remote system (see man ssh_config and man sshd_config ), and the roles evaluate certain command outputs from remote systems. 5.2. Configuring the local system Use the following steps for preparing the local system for the installation of SAP NetWeaver Prerequisites No production software running on the system A minimum of 20480 MB of swap space is configured on the local system Procedure Make a backup if you would like to preserve the original configuration of the server. Note These roles are run after the installation of RHEL; therefore, a backup should not be required. Create a YAML file named sap-netweaver.yml with the following content: - hosts: localhost connection: local roles: - sap_general_preconfigure - sap_netweaver_preconfigure Important The correct indentation of 2 spaces in front of roles: is essential. Run the RHEL System Roles sap_general_preconfigure and sap_netweaver_preconfigure to prepare the managed nodes for the installation of SAP NetWeaver. # ansible-playbook sap-netweaver.yml At the end of the playbook run, the role will likely report that a reboot is required, for example because certain packages had been installed. In this case, reboot the system at this time. 5.3. Verifying the local system RHEL System Roles for SAP can also be used to verify that RHEL systems are configured correctly. Use the following steps to verify that the local system is configured correctly for the installation of SAP NetWeaver. Prerequisites RHEL System Roles for SAP version 3 Procedure Create a YAML file named sap-netweaver.yml with the following content: - hosts: localhost connection: local vars: sap_general_preconfigure_assert: yes sap_general_preconfigure_assert_ignore_errors: yes sap_netweaver_preconfigure_assert: yes sap_netweaver_preconfigure_assert_ignore_errors: yes roles: - sap_general_preconfigure - sap_netweaver_preconfigure Run the following command: # ansible-playbook sap-netweaver.yml In case you would like to get a more compact output, you can pipe the output to the shell script beautify-assert-output.sh , located in the tools directory of each of the preconfigure roles, to just display the essential FAIL or PASS information for each assertion. Assuming you have copied the script to directory ~/bin , the command would be: # ansible-playbook sap-netweaver-assert.yml | ./bin/beautify-assert-output.sh If you are using a terminal with a dark background, replace all occurrences of color code [30m in the following command sequence by [37m . Otherwise, the output of some lines will be unreadable due to a dark font on a dark background.
In case you accidentally ran the above command in a terminal with a dark background, you can re-enable the default white font again with the following command: # echo -e "\033[37mResetting font color\n" 5.4. Configuring remote systems Use the following steps for preparing one or more remote systems (managed nodes) for the installation of SAP HANA. Prerequisites Verify that the managed nodes are correctly set up for installing Red Hat software packages from a Red Hat Satellite server or the Red Hat Customer Portal. Passwordless ssh access to each managed node from the Ansible control node. Supported RHEL release for SAP HANA. For information on supported RHEL releases for SAP HANA, see SAP Note 2235581 . Procedure Make a backup if you would like to preserve the original configuration of the server. Create an inventory file or modify file /etc/ansible/hosts that contains the name of a group of hosts and each system which you intend to configure (=managed node) in a separate line (example for three hosts in a host group named sap_hana ): [sap_hana] host01 host02 host03 Verify that you can log in to all three hosts using ssh without password. Example: # ssh host01 uname -a # ssh host02 hostname # ssh host03 echo test Create a YAML file named sap-hana.yml with the following content: - hosts: sap_hana roles: - sap_general_preconfigure - sap_hana_preconfigure Important The correct indentation (e.g. 2 spaces in front of roles: ) is essential. Run the RHEL System Roles sap_general_preconfigure and sap_hana_preconfigure to prepare the managed nodes for the installation of SAP HANA. # ansible-playbook sap-hana.yml Note The roles are designed to be used right after the initial installation of a managed node. If you want to run these roles against an SAP or other production system, run them in assertion mode first so you can detect which settings deviate from SAP's recommendations as per applicable SAP notes. When run in normal mode, the roles will enforce the SAP recommended configuration on the managed node(s). Unusual system configuration settings might in rare cases still lead to unintended changes by the role. Before using the roles in normal mode on production systems, it is strongly recommended to backup the system and test the roles on a test and QA system first. At the end of the playbook run, the command will report for each managed node that a reboot is required. Reboot the managed nodes at this time. 5.5. Installing SAP software For instructions on installing the SAP HANA database or SAP S/4HANA on RHEL 8 or RHEL 9, refer to Installing SAP HANA or SAP S/4HANA with the RHEL System Roles for SAP .
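As a hedged sketch of the assertion-mode run mentioned in the note above, the playbook below mirrors the NetWeaver verification example; the sap_hana_preconfigure_assert* variable names follow the same naming pattern and are assumptions to verify against the role documentation for your version.

    - hosts: sap_hana
      vars:
        sap_general_preconfigure_assert: yes
        sap_general_preconfigure_assert_ignore_errors: yes
        sap_hana_preconfigure_assert: yes
        sap_hana_preconfigure_assert_ignore_errors: yes
      roles:
        - sap_general_preconfigure
        - sap_hana_preconfigure

Run it with # ansible-playbook sap-hana-assert.yml (the file name is illustrative).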
[ "locale", "export LC_ALL=C", "export LC_ALL=en_US.UTF-8", "- hosts: localhost connection: local roles: - sap_general_preconfigure - sap_netweaver_preconfigure", "ansible-playbook sap-netweaver.yml", "- hosts: localhost connection: local vars: sap_general_preconfigure_assert: yes sap_general_preconfigure_assert_ignore_errors: yes sap_netweaver_preconfigure_assert: yes sap_netweaver_preconfigure_assert_ignore_errors: yes roles: - sap_general_preconfigure - sap_netweaver_preconfigure", "ansible-playbook sap-netweaver.yml", "ansible-playbook sap-netweaver-assert.yml | ./bin/beautify-assert-output.sh", "echo -e \"\\033[37mResetting font color\\n\"", "[sap_hana] host01 host02 host03", "ssh host01 uname -a ssh host02 hostname ssh host03 echo test", "- hosts: sap_hana roles: - sap_general_preconfigure - sap_hana_preconfigure", "ansible-playbook sap-hana.yml" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/red_hat_enterprise_linux_system_roles_for_sap/assembly_quick-start-guide-to-rhel-system-roles-for-sap_rhel-system-roles-for-sap-9
Chapter 10. Related information
Chapter 10. Related information You can refer to the following instructional materials: Upgrade your Red Hat Enterprise Linux Infrastructure Red Hat Enterprise Linux technology capabilities and limits Supported in-place upgrade paths for Red Hat Enterprise Linux In-place upgrade Support Policy Considerations in adopting RHEL 9 Customizing your Red Hat Enterprise Linux in-place upgrade Automating your Red Hat Enterprise Linux pre-upgrade report workflow Using configuration management systems to automate parts of the Leapp pre-upgrade and upgrade process on Red Hat Enterprise Linux Upgrading from RHEL 7 to RHEL 8 Converting from a Linux distribution to RHEL using the Convert2RHEL utility Upgrading SAP environments from RHEL 8 to RHEL 9 Red Hat Insights Documentation Upgrades-related Knowledgebase articles and solutions (Red Hat Knowledgebase) The best practices and recommendations for performing RHEL Upgrade using Leapp Leapp upgrade FAQ (Frequently Asked Questions)
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/upgrading_from_rhel_8_to_rhel_9/ref_related-information_upgrading-from-rhel-8-to-rhel-9
Chapter 2. Red Hat Build of OptaPlanner 8.38 new features
Chapter 2. Red Hat Build of OptaPlanner 8.38 new features This section highlights new features in Red Hat Build of OptaPlanner 8.38. Note Bavet is a feature used for fast score calculation. Bavet is currently only available in the community version of OptaPlanner. It is not available in Red Hat Build of OptaPlanner 8.38. 2.1. Performance improvements in pillar moves and nearby selection OptaPlanner can now auto-detect situations where multiple pillar move selectors can share a precomputed pillar cache and reuse it instead of recomputing the pillar cache for each move selector. If you combine pillar moves, for example, PillarChangeMove and PillarSwapMove , you should see significant performance improvements. This also applies if you use nearby selection. OptaPlanner can now auto-detect situations where a precomputed distance matrix can be shared between multiple move selectors, which saves memory and CPU processing time. As a consequence of this enhancement, implementations of the following interfaces are expected to be stateless: org.optaplanner.core.impl.heuristic.selector.common.nearby.NearbyDistanceMeter org.optaplanner.core.impl.heuristic.selector.common.decorator.SelectionFilter org.optaplanner.core.impl.heuristic.selector.common.decorator.SelectionProbabilityWeightFactory org.optaplanner.core.impl.heuristic.selector.common.decorator.SelectionSorter org.optaplanner.core.impl.heuristic.selector.common.decorator.SelectionSorterWeightFactory In general, if the solver configuration asks the user to implement an interface, the expectation is that the implementation is stateless and does not include external state. With these performance improvements, failing to follow this requirement will result in subtle bugs and score corruption because the solver will now reuse these instances as it sees fit. 2.2. OptaPlanner configuration improvement Various configuration classes, such as EntitySelectorConfig and ValueSelectorConfig , contain new builder methods which make it easier to replace XML-based solver configuration with fluent Java code. 2.3. PlanningListVariable support for K-Opt moves A new move selector for list variables, KOptListMoveSelector , has been added. KOptListMoveSelector selects a single entity, removes k edges from its route, and adds k new edges from the removed edges' endpoints. KOptListMoveSelector can help the solver escape local optima in vehicle routing problems. 2.4. SolutionManager support for updating shadow variables SolutionManager (formerly ScoreManager ) methods such as explain(solution) and update(solution) received a new overload with an extra argument, SolutionUpdatePolicy . This is useful for users who load their solutions from persistent storage (such as a relational database), where these solutions do not include the information carried by shadow variables or the score. By calling these new overloads and picking the right policy, OptaPlanner automatically computes values for all of the shadow variables in a solution or recalculates the score, or both. Similarly, ProblemChangeDirector received a new method called updateShadowVariables() , so that you can update shadow variables on demand in real-time planning. 2.5. Value range auto-detection In most cases, links between planning variables and value ranges can now be automatically detected. Therefore, @ValueRangeProvider no longer needs to provide an ID property. Likewise, planning variables no longer need to reference value range providers through the valueRangeProviderRefs property.
No code changes or configuration changes are required. Users who prefer clarity over brevity can continue to explicitly reference value range providers.
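For illustration only, the following minimal Java sketch shows how the update(solution, SolutionUpdatePolicy) overload described in section 2.4 might be called for a solution loaded from a database. The MySolution domain class, the HardSoftScore score type, the loadFromDatabase() helper, and the solverConfig.xml resource name are hypothetical placeholders, not part of this release.

import org.optaplanner.core.api.score.buildin.hardsoftscore.HardSoftScore;
import org.optaplanner.core.api.solver.SolutionManager;
import org.optaplanner.core.api.solver.SolutionUpdatePolicy;
import org.optaplanner.core.api.solver.SolverFactory;

public class ShadowVariableRefreshSketch {
    public static void main(String[] args) {
        // Build a SolutionManager from the same solver configuration the Solver uses.
        SolverFactory<MySolution> solverFactory =
                SolverFactory.createFromXmlResource("solverConfig.xml");
        SolutionManager<MySolution, HardSoftScore> solutionManager =
                SolutionManager.create(solverFactory);

        // A solution loaded from persistent storage typically carries no shadow variables or score.
        MySolution solution = loadFromDatabase();

        // Recompute both the shadow variables and the score in one call.
        solutionManager.update(solution, SolutionUpdatePolicy.UPDATE_ALL);
    }

    private static MySolution loadFromDatabase() {
        // Hypothetical placeholder: fetch the planning solution from a relational database.
        throw new UnsupportedOperationException("not implemented in this sketch");
    }
}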
null
https://docs.redhat.com/en/documentation/red_hat_build_of_optaplanner/8.38/html/developing_solvers_with_red_hat_build_of_optaplanner/rn-whats-new-con_release-notes
Chapter 2. About Red Hat OpenShift Pipelines
Chapter 2. About Red Hat OpenShift Pipelines Red Hat OpenShift Pipelines is a cloud-native, continuous integration and continuous delivery (CI/CD) solution based on Kubernetes resources. It uses Tekton building blocks to automate deployments across multiple platforms by abstracting away the underlying implementation details. Tekton introduces a number of standard custom resource definitions (CRDs) for defining CI/CD pipelines that are portable across Kubernetes distributions. Note Because Red Hat OpenShift Pipelines releases on a different cadence from OpenShift Container Platform, the Red Hat OpenShift Pipelines documentation is now available as separate documentation sets for each minor version of the product. The Red Hat OpenShift Pipelines documentation is available at https://docs.openshift.com/pipelines/ . Documentation for specific versions is available using the version selector drop-down list, or directly by adding the version to the URL, for example, https://docs.openshift.com/pipelines/1.15 . In addition, the Red Hat OpenShift Pipelines documentation is also available on the Red Hat Customer Portal at https://access.redhat.com/documentation/en-us/red_hat_openshift_pipelines/ . For additional information about the Red Hat OpenShift Pipelines life cycle and supported platforms, refer to the Platform Life Cycle Policy .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_pipelines/1.15/html/about_openshift_pipelines/about-pipelines
Chapter 2. Differences from upstream OpenJDK 17
Chapter 2. Differences from upstream OpenJDK 17 Red Hat build of OpenJDK in Red Hat Enterprise Linux contains a number of structural changes from the upstream distribution of OpenJDK. The Microsoft Windows version of Red Hat build of OpenJDK attempts to follow Red Hat Enterprise Linux updates as closely as possible. The following list details the most notable Red Hat build of OpenJDK 17 changes: FIPS support. Red Hat build of OpenJDK 17 automatically detects whether RHEL is in FIPS mode and automatically configures Red Hat build of OpenJDK 17 to operate in that mode. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Cryptographic policy support. Red Hat build of OpenJDK 17 obtains the list of enabled cryptographic algorithms and key size constraints from the RHEL system configuration. These configuration components are used by the Transport Layer Security (TLS) encryption protocol, the certificate path validation, and any signed JARs. You can set different security profiles to balance safety and compatibility. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Red Hat build of OpenJDK on RHEL dynamically links against native libraries such as zlib for archive format support and libjpeg-turbo , libpng , and giflib for image support. RHEL also dynamically links against HarfBuzz and FreeType for font rendering and management. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. The src.zip file includes the source for all of the JAR libraries shipped with Red Hat build of OpenJDK. Red Hat build of OpenJDK on RHEL uses system-wide timezone data files as a source for timezone information. Red Hat build of OpenJDK on RHEL uses system-wide CA certificates. Red Hat build of OpenJDK on Microsoft Windows includes the latest available timezone data from RHEL. Red Hat build of OpenJDK on Microsoft Windows uses the latest available CA certificate from RHEL. Additional resources See Improve system FIPS detection (RHEL Planning Jira) See Using system-wide cryptographic policies (RHEL documentation)
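As an illustrative aside, not part of the product documentation, one way to observe the effect of the FIPS and cryptographic-policy integration described above is to print the security providers that the running JDK has registered. The class name below is arbitrary, and the resulting list depends entirely on the host configuration.

import java.security.Provider;
import java.security.Security;

public class ListSecurityProviders {
    public static void main(String[] args) {
        // Print every registered security provider; on a FIPS-enabled RHEL host
        // the list and its ordering differ from the upstream OpenJDK defaults.
        for (Provider provider : Security.getProviders()) {
            System.out.println(provider.getName() + ": " + provider.getInfo());
        }
    }
}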
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.2/rn-openjdk-diff-from-upstream
Chapter 1. Creating a software bill of materials manifest file
Chapter 1. Creating a software bill of materials manifest file Red Hat Trusted Profile Analyzer can analyze both CycloneDX and Software Package Data Exchange (SPDX) SBOM formats by using the JSON file format. Many open source tools are available to you for creating Software Bill of Materials (SBOM) manifest files from container images, or for your application. For this procedure we are going to use the Syft tool. Important Currently, Trusted Profile Analyzer only supports CycloneDX version 1.3, 1.4, and 1.5, along with SPDX version 2.2, and 2.3. Prerequisites Install Syft for your workstation platform: Red Hat Ecosystem Catalog GitHub Procedure To create an SBOM by using a container image. CycloneDX format: Syntax syft IMAGE_PATH -o [email protected] Example USD syft registry:example.io/hello-world:latest -o [email protected] SPDX format: Syntax syft IMAGE_PATH -o [email protected] Example USD syft registry:example.io/hello-world:latest -o [email protected] Note Syft supports many types of container image sources. See the official supported source list on Syft's GitHub site . To create an SBOM by scanning the local file system. CycloneDX format: Syntax syft dir: DIRECTORY_PATH -o [email protected] syft file: FILE_PATH -o [email protected] Example USD syft dir:. -o [email protected] USD syft file:/example-binary -o [email protected] SPDX format: Syntax syft dir: DIRECTORY_PATH -o [email protected] syft file: FILE_PATH -o [email protected] Example USD syft dir:. -o [email protected] USD syft file:/example-binary -o [email protected] Additional resources Scanning an SBOM manifest file by using the Red Hat Trusted Profile Analyzer managed service. National Telecommunications and Information Administration's (NTIA) How-to Guide on SBOM generation .
[ "syft IMAGE_PATH -o [email protected]", "syft registry:example.io/hello-world:latest -o [email protected]", "syft IMAGE_PATH -o [email protected]", "syft registry:example.io/hello-world:latest -o [email protected]", "syft dir: DIRECTORY_PATH -o [email protected] syft file: FILE_PATH -o [email protected]", "syft dir:. -o [email protected] syft file:/example-binary -o [email protected]", "syft dir: DIRECTORY_PATH -o [email protected] syft file: FILE_PATH -o [email protected]", "syft dir:. -o [email protected] syft file:/example-binary -o [email protected]" ]
https://docs.redhat.com/en/documentation/red_hat_trusted_profile_analyzer/1/html/reference_guide/creating-an-sbom-manifest-file_ref
Chapter 2. Basics of IdM API
Chapter 2. Basics of IdM API You can use the IdM API to automate access to your IdM environment with your custom scripts. 2.1. Initializing IdM API To use the IdM API, first initialize it in your environment. Prerequisites The IdM server or IdM client package is installed. A valid Kerberos ticket is issued. Procedure To initialize the IdM API, include the following code at the beginning of your script: To establish a connection with the LDAP server, add the following logic to your script after API initialization: If you run your script on the IdM server, this logic allows your script to connect directly to the LDAP server. If you run your script on the IdM client, the script uses the Remote Procedure Call (RPC) client. Additional resources IdM API context 2.2. Running IdM API commands You can run IdM API commands within your script. To run an IdM API command, use the api.Command structure in your script. Prerequisites The IdM API is initialized. For more information, see Initializing IdM API . Procedure For example, to list information about a user, include the following code in your script: In this example, you also pass arguments and options to the command user_show . Additional resources For the full list of the api.Command commands, see IPA API Commands web source. 2.3. IdM API commands output structure Each IdM API command has four sections for its output. These sections contain various information about the command execution. IdM API output structure result This section provides the result of the command. It contains various details about the command operation, such as options and arguments which were passed to the command. values This section shows the primary argument that was passed to the command. messages This section shows various information which the ipa tool provides after the execution of the command. summary This section shows a summary of the operation. In this example, your script executes the user_add command: The output structure of that command is below: 2.4. Listing the IdM API commands and parameters You can list information about an IdM API command and its parameters by using the commands command_show and param_show . Prerequisites The IdM API is initialized. For more information, see Initializing IdM API . Procedure To display information about the user_add command, execute the following code: The result for this command is as follows: To display information about the givenname parameter for the user_add command, execute the following code: The result for this command is as follows: 2.5. Using batches for executing IdM API commands You can execute multiple IdM API commands with a single call by using the batch command. The following example shows how to create multiple IdM users. Prerequisites The IdM API is initialized. For more information, see Initializing IdM API . Procedure To create 100 IdM users in one batch, include the following code in your script: 2.6. IdM API context The IdM API context determines which plug-ins the API uses. Specify the context during API initialization. For an example of how to use the IdM API context, see Initializing IdM API . IdM API context server Set of plug-ins which validate arguments and options that are passed to IdM API commands for execution. client Set of plug-ins which validate arguments and options that are forwarded to the IdM server for execution. installer Set of plug-ins which are specific to the installation process. updates Set of plug-ins which are specific to the updating process.
[ "from ipalib import api api.bootstrap(context=\"server\") api.finalize()", "if api.env.in_server: api.Backend.ldap2.connect() else: api.Backend.rpcclient.connect()", "api.Command.user_show(\" user_name \", no_members=True, all=True)", "api.Command.user_add(\"test\", givenname=\"a\", sn=\"b\")", "{ \"result\": { \"displayname\": [\"a b\"], \"objectclass\": [ \"top\", \"person\", \"organizationalperson\", \"inetorgperson\", \"inetuser\", \"posixaccount\", \"krbprincipalaux\", \"krbticketpolicyaux\", \"ipaobject\", \"ipasshuser\", \"ipaSshGroupOfPubKeys\", \"mepOriginEntry\", \"ipantuserattrs\", ], \"cn\": [\"a b\"], \"gidnumber\": [\"1445000004\"], \"mail\": [\"[email protected]\"], \"krbprincipalname\": [ipapython.kerberos.Principal(\"[email protected]\")], \"loginshell\": [\"/bin/sh\"], \"initials\": [\"ab\"], \"uid\": [\"test\"], \"uidnumber\": [\"1445000004\"], \"sn\": [\"b\"], \"krbcanonicalname\": [ipapython.kerberos.Principal(\"[email protected]\")], \"homedirectory\": [\"/home/test\"], \"givenname\": [\"a\"], \"gecos\": [\"a b\"], \"ipauniqueid\": [\"9f9c1df8-5073-11ed-9a56-fa163ea98bb3\"], \"mepmanagedentry\": [ ipapython.dn.DN(\"cn=test,cn=groups,cn=accounts,dc=ipa,dc=test\") ], \"has_password\": False, \"has_keytab\": False, \"memberof_group\": [\"ipausers\"], \"dn\": ipapython.dn.DN(\"uid=test,cn=users,cn=accounts,dc=ipa,dc=test\"), }, \"value\": \"test\", \"messages\": [ { \"type\": \"warning\", \"name\": \"VersionMissing\", \"message\": \"API Version number was not sent, forward compatibility not guaranteed. Assuming server's API version, 2.248\", \"code\": 13001, \"data\": {\"server_version\": \"2.248\"}, } ], \"summary\": 'Added user \"test\"', }", "api.Command.command_show(\"user_add\")", "{ \"result\": { \"name\": \"user_add\", \"version\": \"1\", \"full_name\": \"user_add/1\", \"doc\": \"Add a new user.\", \"topic_topic\": \"user/1\", \"obj_class\": \"user/1\", \"attr_name\": \"add\", }, \"value\": \"user_add\", \"messages\": [ { \"type\": \"warning\", \"name\": \"VersionMissing\", \"message\": \"API Version number was not sent, forward compatibility not guaranteed. Assuming server's API version, 2.251\", \"code\": 13001, \"data\": {\"server_version\": \"2.251\"}, } ], \"summary\": None, }", "api.Command.param_show(\"user_add\", name=\"givenname\")", "{ \"result\": { \"name\": \"givenname\", \"type\": \"str\", \"positional\": False, \"cli_name\": \"first\", \"label\": \"First name\", }, \"value\": \"givenname\", \"messages\": [ { \"type\": \"warning\", \"name\": \"VersionMissing\", \"message\": \"API Version number was not sent, forward compatibility not guaranteed. Assuming server's API version, 2.251\", \"code\": 13001, \"data\": {\"server_version\": \"2.251\"}, } ], \"summary\": None, }", "batch_args = [] for i in range(100): user_id = \"user%i\" % i args = [user_id] kw = { 'givenname' : user_id, 'sn' : user_id } batch_args.append({ 'method' : 'user_add', 'params' : [args, kw] }) ret = api.Command.batch(*batch_args)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_idm_api/assembly_basics-of-idm-api_using-idm-api
Part IX. Performance Tuning
Part IX. Performance Tuning This part provides recommended practices for optimizing the performance of Identity Management .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/p-tuning
Chapter 40. Developing Asynchronous Applications
Chapter 40. Developing Asynchronous Applications Abstract JAX-WS provides an easy mechanism for accessing services asynchronously. The SEI can specify additional methods that can be used to access a service asynchronously. The Apache CXF code generators generate the extra methods for you. You simply add the business logic. 40.1. Types of Asynchronous Invocation In addition to the usual synchronous mode of invocation, Apache CXF supports two forms of asynchronous invocation: Polling approach - To invoke the remote operation using the polling approach, you call a method that has no output parameters, but returns a javax.xml.ws.Response object. The Response object (which inherits from the java.util.concurrent.Future interface) can be polled to check whether or not a response message has arrived. Callback approach - To invoke the remote operation using the callback approach, you call a method that takes a reference to a callback object (of javax.xml.ws.AsyncHandler type) as one of its parameters. When the response message arrives at the client, the runtime calls back on the AsyncHandler object, and gives it the contents of the response message. 40.2. WSDL for Asynchronous Examples Example 40.1, "WSDL Contract for Asynchronous Example" shows the WSDL contract that is used for the asynchronous examples. The contract defines a single interface, GreeterAsync, which contains a single operation, greetMeSometime. Example 40.1. WSDL Contract for Asynchronous Example 40.3. Generating the Stub Code Overview The asynchronous style of invocation requires extra stub code for the dedicated asynchronous methods defined on the SEI. This special stub code is not generated by default. To switch on the asynchronous feature and generate the requisite stub code, you must use the mapping customization feature from the WSDL 2.0 specification. Customization enables you to modify the way the Maven code generation plug-in generates stub code. In particular, it enables you to modify the WSDL-to-Java mapping and to switch on certain features. Here, customization is used to switch on the asynchronous invocation feature. Customizations are specified using a binding declaration, which you define using a jaxws:bindings tag (where the jaxws prefix is tied to the http://java.sun.com/xml/ns/jaxws namespace). There are two ways of specifying a binding declaration: External Binding Declaration When using an external binding declaration the jaxws:bindings element is defined in a file separate from the WSDL contract. You specify the location of the binding declaration file to the code generator when you generate the stub code. Embedded Binding Declaration When using an embedded binding declaration you embed the jaxws:bindings element directly in a WSDL contract, treating it as a WSDL extension. In this case, the settings in jaxws:bindings apply only to the immediate parent element. Using an external binding declaration The template for a binding declaration file that switches on asynchronous invocations is shown in Example 40.2, "Template for an Asynchronous Binding Declaration" . Example 40.2. Template for an Asynchronous Binding Declaration Where AffectedWSDL specifies the URL of the WSDL contract that is affected by this binding declaration. The AffectedNode is an XPath value that specifies which node (or nodes) from the WSDL contract are affected by this binding declaration. You can set AffectedNode to wsdl:definitions if you want the entire WSDL contract to be affected. 
The jaxws:enableAsyncMapping element is set to true to enable the asynchronous invocation feature. For example, if you want to generate asynchronous methods only for the GreeterAsync interface, you can specify <bindings node="wsdl:definitions/wsdl:portType[@name='GreeterAsync']"> in the preceding binding declaration. Assuming that the binding declaration is stored in a file, async_binding.xml , you would set up your POM as shown in Example 40.3, "Consumer Code Generation" . Example 40.3. Consumer Code Generation The -b option tells the code generator where to locate the external binding file. For more information on the code generator see Section 44.2, "cxf-codegen-plugin" . Using an embedded binding declaration You can also embed the binding customization directly into the WSDL document defining the service by placing the jaxws:bindings element and its associated jaxws:enableAsyncMapping child directly into the WSDL. You also must add a namespace declaration for the jaxws prefix. Example 40.4, "WSDL with Embedded Binding Declaration for Asynchronous Mapping" shows a WSDL file with an embedded binding declaration that activates the asynchronous mapping for an operation. Example 40.4. WSDL with Embedded Binding Declaration for Asynchronous Mapping When embedding the binding declaration into the WSDL document you can control the scope affected by the declaration by changing where you place the declaration. When the declaration is placed as a child of the wsdl:definitions element the code generator creates asynchronous methods for all of the operations defined in the WSDL document. If it is placed as a child of a wsdl:portType element the code generator creates asynchronous methods for all of the operations defined in the interface. If it is placed as a child of a wsdl:operation element the code generator creates asynchronous methods for only that operation. It is not necessary to pass any special options to the code generator when using embedded declarations. The code generator will recognize them and act accordingly. Generated interface After generating the stub code in this way, the GreeterAsync SEI (in the file GreeterAsync.java ) is defined as shown in Example 40.5, "Service Endpoint Interface with Methods for Asynchronous Invocations" . Example 40.5. Service Endpoint Interface with Methods for Asynchronous Invocations In addition to the usual synchronous method, greetMeSometime() , two asynchronous methods are also generated for the greetMeSometime operation: Callback approach public Future<?> greetMeSometimeAsync java.lang.String requestType AsyncHandler<GreetMeSometimeResponse> asyncHandler Polling approach public Response<GreetMeSometimeResponse> greetMeSometimeAsync java.lang.String requestType 40.4. Implementing an Asynchronous Client with the Polling Approach Overview The polling approach is the more straightforward of the two approaches to developing an asynchronous application. The client invokes the asynchronous method called OperationName Async() and is returned a Response<T> object that it polls for a response. What the client does while it is waiting for a response depends on the requirements of the application. There are two basic patterns for handling the polling: Non-blocking polling - You periodically check to see if the result is ready by calling the non-blocking Response<T>.isDone() method. If the result is ready, the client processes it. If it is not, the client continues doing other things. 
Blocking polling - You call Response<T>.get() right away, and block until the response arrives (optionally specifying a timeout). Using the non-blocking pattern Example 40.6, "Non-Blocking Polling Approach for an Asynchronous Operation Call" illustrates using non-blocking polling to make an asynchronous invocation on the greetMeSometime operation defined in Example 40.1, "WSDL Contract for Asynchronous Example" . The client invokes the asynchronous operation and periodically checks to see if the result is returned. Example 40.6. Non-Blocking Polling Approach for an Asynchronous Operation Call The code in Example 40.6, "Non-Blocking Polling Approach for an Asynchronous Operation Call" does the following: Invokes the greetMeSometimeAsync() method on the proxy. The method call returns the Response<GreetMeSometimeResponse> object to the client immediately. The Apache CXF runtime handles the details of receiving the reply from the remote endpoint and populating the Response<GreetMeSometimeResponse> object. Note The runtime transmits the request to the remote endpoint's greetMeSometime() method and handles the details of the asynchronous nature of the call transparently. The endpoint, and therefore the service implementation, never worries about the details of how the client intends to wait for a response. Checks to see if a response has arrived by checking the isDone() method of the returned Response object. If the response has not arrived, the client continues working before checking again. When the response arrives, the client retrieves it from the Response object using the get() method. Using the blocking pattern When using the blocking polling pattern, the Response object's isDone() method is never called. Instead, the Response object's get() method is called immediately after invoking the remote operation. The get() method blocks until the response is available. You can also pass a timeout limit to the get() method. Example 40.7, "Blocking Polling Approach for an Asynchronous Operation Call" shows a client that uses blocking polling. Example 40.7. Blocking Polling Approach for an Asynchronous Operation Call 40.5. Implementing an Asynchronous Client with the Callback Approach Overview An alternative approach to making an asynchronous operation invocation is to implement a callback class. You then call the asynchronous remote method that takes the callback object as a parameter. The runtime returns the response to the callback object. To implement an application that uses callbacks, do the following: Create a callback class that implements the AsyncHandler interface. Note Your callback object can perform any amount of response processing required by your application. Make remote invocations using the operationName Async() method that takes the callback object as a parameter and returns a Future<?> object. If your client requires access to the response data, you can poll the returned Future<?> object's isDone() method to see if the remote endpoint has sent the response. If the callback object does all of the response processing, it is not necessary to check if the response has arrived. Implementing the callback The callback class must implement the javax.xml.ws.AsyncHandler interface. The interface defines a single method: handleResponse Response<T> res The Apache CXF runtime calls the handleResponse() method to notify the client that the response has arrived. Example 40.8, "The javax.xml.ws.AsyncHandler Interface" shows an outline of the AsyncHandler interface that you must implement. Example 40.8. 
The javax.xml.ws.AsyncHandler Interface Example 40.9, "Callback Implementation Class" shows a callback class for the greetMeSometime operation defined in Example 40.1, "WSDL Contract for Asynchronous Example" . Example 40.9. Callback Implementation Class The callback implementation shown in Example 40.9, "Callback Implementation Class" does the following: Defines a member variable, reply , that holds the response returned from the remote endpoint. Implements handleResponse() . This implementation simply extracts the response and assigns it to the member variable reply . Implements an added method called getResponse() . This method is a convenience method that extracts the data from reply and returns it. Implementing the consumer Example 40.10, "Callback Approach for an Asynchronous Operation Call" illustrates a client that uses the callback approach to make an asynchronous call to the GreetMeSometime operation defined in Example 40.1, "WSDL Contract for Asynchronous Example" . Example 40.10. Callback Approach for an Asynchronous Operation Call The code in Example 40.10, "Callback Approach for an Asynchronous Operation Call" does the following: Instantiates a callback object. Invokes the greetMeSometimeAsync() method that takes the callback object on the proxy. The method call returns the Future<?> object to the client immediately. The Apache CXF runtime handles the details of receiving the reply from the remote endpoint, invoking the callback object's handleResponse() method, and populating the Response<GreetMeSometimeResponse> object. Note The runtime transmits the request to the remote endpoint's greetMeSometime() method and handles the details of the asynchronous nature of the call without the remote endpoint's knowledge. The endpoint, and therefore the service implementation, does not need to worry about the details of how the client intends to wait for a response. Uses the returned Future<?> object's isDone() method to check if the response has arrived from the remote endpoint. Invokes the callback object's getResponse() method to get the response data. 40.6. Catching Exceptions Returned from a Remote Service Overview Consumers making asynchronous requests will not receive the same exceptions returned when they make synchronous requests. Any exceptions returned to the consumer asynchronously are wrapped in an ExecutionException exception. The actual exception thrown by the service is stored in the ExecutionException exception's cause field. Catching the exception Exceptions generated by a remote service are thrown locally by the method that passes the response to the consumer's business logic. When the consumer makes a synchronous request, the method making the remote invocation throws the exception. When the consumer makes an asynchronous request, the Response<T> object's get() method throws the exception. The consumer will not discover that an error was encountered in processing the request until it attempts to retrieve the response message. Unlike the methods generated by the JAX-WS framework, the Response<T> object's get() method throws neither user-modeled exceptions nor generic JAX-WS exceptions. Instead, it throws a java.util.concurrent.ExecutionException exception. Getting the exception details The framework stores the exception returned from the remote service in the ExecutionException exception's cause field. The details about the remote exception are extracted by getting the value of the cause field and examining the stored exception. 
The stored exception can be any user-defined exception or one of the generic JAX-WS exceptions. Example Example 40.11, "Catching an Exception using the Polling Approach" shows an example of catching an exception using the polling approach. Example 40.11. Catching an Exception using the Polling Approach The code in Example 40.11, "Catching an Exception using the Polling Approach" does the following: Wraps the call to the Response<T> object's get() method in a try/catch block. Catches an ExecutionException exception. Extracts the cause field from the exception. If the consumer was using the callback approach, the code used to catch the exception would be placed in the callback object where the service's response is extracted.
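To make that last point concrete, the following sketch is an illustrative variant, not one of the numbered examples: it moves the exception handling into the callback class from Example 40.9 by catching the ExecutionException inside handleResponse(). The class name is arbitrary; the response type and getResponseType() accessor come from the chapter's own generated code.

import java.util.concurrent.ExecutionException;
import javax.xml.ws.AsyncHandler;
import javax.xml.ws.Response;
import org.apache.hello_world_async_soap_http.types.GreetMeSometimeResponse;

public class GreeterFaultAwareHandler implements AsyncHandler<GreetMeSometimeResponse> {
    private GreetMeSometimeResponse reply;

    public void handleResponse(Response<GreetMeSometimeResponse> response) {
        try {
            // get() throws ExecutionException if the remote service returned a fault.
            reply = response.get();
        } catch (ExecutionException ee) {
            Throwable cause = ee.getCause();
            System.out.println("Exception " + cause.getClass().getName()
                + " thrown by the remote service.");
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
        }
    }

    public String getResponse() {
        return reply == null ? null : reply.getResponseType();
    }
}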
[ "<?xml version=\"1.0\" encoding=\"UTF-8\"?><wsdl:definitions xmlns=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:soap=\"http://schemas.xmlsoap.org/wsdl/soap/\" xmlns:tns=\"http://apache.org/hello_world_async_soap_http\" xmlns:x1=\"http://apache.org/hello_world_async_soap_http/types\" xmlns:wsdl=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" targetNamespace=\"http://apache.org/hello_world_async_soap_http\" name=\"HelloWorld\"> <wsdl:types> <schema targetNamespace=\"http://apache.org/hello_world_async_soap_http/types\" xmlns=\"http://www.w3.org/2001/XMLSchema\" xmlns:x1=\"http://apache.org/hello_world_async_soap_http/types\" elementFormDefault=\"qualified\"> <element name=\"greetMeSometime\"> <complexType> <sequence> <element name=\"requestType\" type=\"xsd:string\"/> </sequence> </complexType> </element> <element name=\"greetMeSometimeResponse\"> <complexType> <sequence> <element name=\"responseType\" type=\"xsd:string\"/> </sequence> </complexType> </element> </schema> </wsdl:types> <wsdl:message name=\"greetMeSometimeRequest\"> <wsdl:part name=\"in\" element=\"x1:greetMeSometime\"/> </wsdl:message> <wsdl:message name=\"greetMeSometimeResponse\"> <wsdl:part name=\"out\" element=\"x1:greetMeSometimeResponse\"/> </wsdl:message> <wsdl:portType name=\"GreeterAsync\"> <wsdl:operation name=\"greetMeSometime\"> <wsdl:input name=\"greetMeSometimeRequest\" message=\"tns:greetMeSometimeRequest\"/> <wsdl:output name=\"greetMeSometimeResponse\" message=\"tns:greetMeSometimeResponse\"/> </wsdl:operation> </wsdl:portType> <wsdl:binding name=\"GreeterAsync_SOAPBinding\" type=\"tns:GreeterAsync\"> </wsdl:binding> <wsdl:service name=\"SOAPService\"> <wsdl:port name=\"SoapPort\" binding=\"tns:GreeterAsync_SOAPBinding\"> <soap:address location=\"http://localhost:9000/SoapContext/SoapPort\"/> </wsdl:port> </wsdl:service> </wsdl:definitions>", "<bindings xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" xmlns:wsdl=\"http://schemas.xmlsoap.org/wsdl/\" wsdlLocation=\" AffectedWSDL \" xmlns=\"http://java.sun.com/xml/ns/jaxws\"> <bindings node=\" AffectedNode \"> <enableAsyncMapping>true</enableAsyncMapping> </bindings> </bindings>", "<plugin> <groupId>org.apache.cxf</groupId> <artifactId>cxf-codegen-plugin</artifactId> <version>USD{cxf.version}</version> <executions> <execution> <id>generate-sources</id> <phase>generate-sources</phase> <configuration> <sourceRoot> outputDir </sourceRoot> <wsdlOptions> <wsdlOption> <wsdl>hello_world.wsdl</wsdl> <extraargs> <extraarg>-client</extraarg> <extraarg>-b async_binding.xml</extraarg> </extraargs> </wsdlOption> </wsdlOptions> </configuration> <goals> <goal>wsdl2java</goal> </goals> </execution> </executions> </plugin>", "<wsdl:definitions xmlns=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:jaxws=\"http://java.sun.com/xml/ns/jaxws\" ...> <wsdl:portType name=\"GreeterAsync\"> <wsdl:operation name=\"greetMeSometime\"> <jaxws:bindings> <jaxws:enableAsyncMapping>true</jaxws:enableAsyncMapping> </jaxws:bindings> <wsdl:input name=\"greetMeSometimeRequest\" message=\"tns:greetMeSometimeRequest\"/> <wsdl:output name=\"greetMeSometimeResponse\" message=\"tns:greetMeSometimeResponse\"/> </wsdl:operation> </wsdl:portType> </wsdl:definitions>", "package org.apache.hello_world_async_soap_http; import org.apache.hello_world_async_soap_http.types.GreetMeSometimeResponse; public interface GreeterAsync { public Future<?> greetMeSometimeAsync( java.lang.String requestType, AsyncHandler<GreetMeSometimeResponse> asyncHandler ); public 
Response<GreetMeSometimeResponse> greetMeSometimeAsync( java.lang.String requestType ); public java.lang.String greetMeSometime( java.lang.String requestType ); }", "package demo.hw.client; import java.io.File; import java.util.concurrent.Future; import javax.xml.namespace.QName; import javax.xml.ws.Response; import org.apache.hello_world_async_soap_http.*; public final class Client { private static final QName SERVICE_NAME = new QName(\"http://apache.org/hello_world_async_soap_http\", \"SOAPService\"); private Client() {} public static void main(String args[]) throws Exception { // set up the proxy for the client Response<GreetMeSometimeResponse> greetMeSomeTimeResp = port.greetMeSometimeAsync(System.getProperty(\"user.name\")); while (!greetMeSomeTimeResp.isDone()) { // client does some work } GreetMeSometimeResponse reply = greetMeSomeTimeResp.get(); // process the response System.exit(0); } }", "package demo.hw.client; import java.io.File; import java.util.concurrent.Future; import javax.xml.namespace.QName; import javax.xml.ws.Response; import org.apache.hello_world_async_soap_http.*; public final class Client { private static final QName SERVICE_NAME = new QName(\"http://apache.org/hello_world_async_soap_http\", \"SOAPService\"); private Client() {} public static void main(String args[]) throws Exception { // set up the proxy for the client Response<GreetMeSometimeResponse> greetMeSomeTimeResp = port.greetMeSometimeAsync(System.getProperty(\"user.name\")); GreetMeSometimeResponse reply = greetMeSomeTimeResp.get(); // process the response System.exit(0); } }", "public interface javax.xml.ws.AsyncHandler { void handleResponse(Response<T> res) }", "package demo.hw.client; import javax.xml.ws.AsyncHandler; import javax.xml.ws.Response; import org.apache.hello_world_async_soap_http.types.*; public class GreeterAsyncHandler implements AsyncHandler<GreetMeSometimeResponse> { private GreetMeSometimeResponse reply; public void handleResponse(Response<GreetMeSometimeResponse> response) { try { reply = response.get(); } catch (Exception ex) { ex.printStackTrace(); } } public String getResponse() { return reply.getResponseType(); } }", "package demo.hw.client; import java.io.File; import java.util.concurrent.Future; import javax.xml.namespace.QName; import javax.xml.ws.Response; import org.apache.hello_world_async_soap_http.*; public final class Client { public static void main(String args[]) throws Exception { // Callback approach GreeterAsyncHandler callback = new GreeterAsyncHandler(); Future<?> response = port.greetMeSometimeAsync(System.getProperty(\"user.name\"), callback); while (!response.isDone()) { // Do some work } resp = callback.getResponse(); System.exit(0); } }", "package demo.hw.client; import java.io.File; import java.util.concurrent.Future; import javax.xml.namespace.QName; import javax.xml.ws.Response; import org.apache.hello_world_async_soap_http.*; public final class Client { private static final QName SERVICE_NAME = new QName(\"http://apache.org/hello_world_async_soap_http\", \"SOAPService\"); private Client() {} public static void main(String args[]) throws Exception { // port is a previously established proxy object. 
Response<GreetMeSometimeResponse> resp = port.greetMeSometimeAsync(System.getProperty(\"user.name\")); while (!resp.isDone()) { // client does some work } try { GreetMeSometimeResponse reply = resp.get(); // process the response } catch (ExecutionException ee) { Throwable cause = ee.getCause(); System.out.println(\"Exception \"+cause.getClass().getName()+\" thrown by the remote service.\"); } } }" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/JAXWSAsyncDev
Chapter 72. Kubernetes Persistent Volume Claim
Chapter 72. Kubernetes Persistent Volume Claim Since Camel 2.17 Only producer is supported The Kubernetes Persistent Volume Claim component is one of the Kubernetes Components which provides a producer to execute Kubernetes Persistent Volume Claim operations. 72.1. Dependencies When using kubernetes-persistent-volumes-claims with Red Hat build of Apache Camel for Spring Boot, use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency> 72.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 72.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example, a component may have security settings, credentials for authentication, URLs for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have preconfigured defaults that are commonly used, you may often only need to configure a few options on a component, or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 72.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows you to avoid hardcoding URLs, port numbers, sensitive information, and other settings. In other words, placeholders allow you to externalize the configuration from your code, which gives more flexibility and reuse. The following two sections list all the options, firstly for the component followed by the endpoint. 72.3. Component Options The Kubernetes Persistent Volume Claim component supports 3 options, which are listed below. Name Description Default Type kubernetesClient (producer) Autowired To use an existing kubernetes client. KubernetesClient lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 72.4. 
Endpoint Options The Kubernetes Persistent Volume Claim endpoint is configured using URI syntax: with the following path and query parameters: 72.4.1. Path Parameters (1 parameters) Name Description Default Type masterUrl (producer) Required Kubernetes Master url. String 72.4.2. Query Parameters (21 parameters) Name Description Default Type apiVersion (producer) The Kubernetes API Version to use. String dnsDomain (producer) The dns domain, used for ServiceCall EIP. String kubernetesClient (producer) Default KubernetesClient to use if provided. KubernetesClient namespace (producer) The namespace. String operation (producer) Producer operation to do on Kubernetes. String portName (producer) The port name, used for ServiceCall EIP. String portProtocol (producer) The port protocol, used for ServiceCall EIP. tcp String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer caCertData (security) The CA Cert Data. String caCertFile (security) The CA Cert File. String clientCertData (security) The Client Cert Data. String clientCertFile (security) The Client Cert File. String clientKeyAlgo (security) The Key Algorithm used by the client. String clientKeyData (security) The Client Key data. String clientKeyFile (security) The Client Key file. String clientKeyPassphrase (security) The Client Key Passphrase. String oauthToken (security) The Auth Token. String password (security) Password to connect to Kubernetes. String trustCerts (security) Define if the certs we used are trusted anyway or not. Boolean username (security) Username to connect to Kubernetes. String 72.5. Message Headers The Kubernetes Persistent Volume Claim component supports 5 message header(s), which is/are listed below: Name Description Default Type CamelKubernetesOperation (producer) Constant: KUBERNETES_OPERATION The Producer operation. String CamelKubernetesNamespaceName (producer) Constant: KUBERNETES_NAMESPACE_NAME The namespace name. String CamelKubernetesPersistentVolumesClaimsLabels (producer) Constant: KUBERNETES_PERSISTENT_VOLUMES_CLAIMS_LABELS The persistent volume claim labels. Map CamelKubernetesPersistentVolumeClaimName (producer) Constant: KUBERNETES_PERSISTENT_VOLUME_CLAIM_NAME The persistent volume claim name. String CamelKubernetesPersistentVolumeClaimSpec (producer) Constant: KUBERNETES_PERSISTENT_VOLUME_CLAIM_SPEC The spec for a persistent volume claim. PersistentVolumeClaimSpec 72.6. Supported producer operation listPersistentVolumesClaims listPersistentVolumesClaimsByLabels getPersistentVolumeClaim createPersistentVolumeClaim updatePersistentVolumeClaim deletePersistentVolumeClaim 72.7. Kubernetes Persistent Volume Claims Producer Examples listPersistentVolumesClaims: this operation lists the pvc on a kubernetes cluster. from("direct:list"). 
toF("kubernetes-persistent-volumes-claims:///?kubernetesClient=#kubernetesClient&operation=listPersistentVolumesClaims"). to("mock:result"); This operation returns a List of pvc from your cluster. listPersistentVolumesClaimsByLabels: this operation lists the pvc by labels on a kubernetes cluster. from("direct:listByLabels").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put("key1", "value1"); labels.put("key2", "value2"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_PERSISTENT_VOLUMES_CLAIMS_LABELS, labels); } }); toF("kubernetes-persistent-volumes-claims:///?kubernetesClient=#kubernetesClient&operation=listPersistentVolumesClaimsByLabels"). to("mock:result"); This operation returns a List of pvc from your cluster, using a label selector (with key1 and key2, with value value1 and value2). 72.8. Spring Boot Auto-Configuration The component supports 102 options, which are listed below. Name Description Default Type camel.cluster.kubernetes.attributes Custom service attributes. Map camel.cluster.kubernetes.cluster-labels Set the labels used to identify the pods composing the cluster. Map camel.cluster.kubernetes.config-map-name Set the name of the ConfigMap used to do optimistic locking (defaults to 'leaders'). String camel.cluster.kubernetes.connection-timeout-millis Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer camel.cluster.kubernetes.enabled Sets if the Kubernetes cluster service should be enabled or not, default is false. false Boolean camel.cluster.kubernetes.id Cluster Service ID. String camel.cluster.kubernetes.jitter-factor A jitter factor to apply in order to prevent all pods to call Kubernetes APIs in the same instant. Double camel.cluster.kubernetes.kubernetes-namespace Set the name of the Kubernetes namespace containing the pods and the configmap (autodetected by default). String camel.cluster.kubernetes.lease-duration-millis The default duration of the lease for the current leader. Long camel.cluster.kubernetes.master-url Set the URL of the Kubernetes master (read from Kubernetes client properties by default). String camel.cluster.kubernetes.order Service lookup order/priority. Integer camel.cluster.kubernetes.pod-name Set the name of the current pod (autodetected from container host name by default). String camel.cluster.kubernetes.renew-deadline-millis The deadline after which the leader must stop its services because it may have lost the leadership. Long camel.cluster.kubernetes.retry-period-millis The time between two subsequent attempts to check and acquire the leadership. It is randomized using the jitter factor. Long camel.component.kubernetes-config-maps.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-config-maps.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-config-maps.enabled Whether to enable auto configuration of the kubernetes-config-maps component. This is enabled by default. Boolean camel.component.kubernetes-config-maps.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-config-maps.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-custom-resources.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-custom-resources.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-custom-resources.enabled Whether to enable auto configuration of the kubernetes-custom-resources component. This is enabled by default. Boolean camel.component.kubernetes-custom-resources.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-custom-resources.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-deployments.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.kubernetes-deployments.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-deployments.enabled Whether to enable auto configuration of the kubernetes-deployments component. This is enabled by default. Boolean camel.component.kubernetes-deployments.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-deployments.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-events.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-events.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-events.enabled Whether to enable auto configuration of the kubernetes-events component. This is enabled by default. Boolean camel.component.kubernetes-events.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-events.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-hpa.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-hpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-hpa.enabled Whether to enable auto configuration of the kubernetes-hpa component. This is enabled by default. Boolean camel.component.kubernetes-hpa.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-hpa.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-job.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-job.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-job.enabled Whether to enable auto configuration of the kubernetes-job component. This is enabled by default. Boolean camel.component.kubernetes-job.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-job.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-namespaces.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-namespaces.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-namespaces.enabled Whether to enable auto configuration of the kubernetes-namespaces component. This is enabled by default. Boolean camel.component.kubernetes-namespaces.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-namespaces.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-nodes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-nodes.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-nodes.enabled Whether to enable auto configuration of the kubernetes-nodes component. This is enabled by default. Boolean camel.component.kubernetes-nodes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-nodes.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes-claims.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes-claims.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes-claims component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes-claims.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes-claims.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-pods.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-pods.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-pods.enabled Whether to enable auto configuration of the kubernetes-pods component. This is enabled by default. Boolean camel.component.kubernetes-pods.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-pods.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-replication-controllers.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-replication-controllers.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-replication-controllers.enabled Whether to enable auto configuration of the kubernetes-replication-controllers component. This is enabled by default. Boolean camel.component.kubernetes-replication-controllers.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-replication-controllers.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-resources-quota.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-resources-quota.enabled Whether to enable auto configuration of the kubernetes-resources-quota component. This is enabled by default. Boolean camel.component.kubernetes-resources-quota.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-resources-quota.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-secrets.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-secrets.enabled Whether to enable auto configuration of the kubernetes-secrets component. This is enabled by default. Boolean camel.component.kubernetes-secrets.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-secrets.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-service-accounts.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-service-accounts.enabled Whether to enable auto configuration of the kubernetes-service-accounts component. 
This is enabled by default. Boolean camel.component.kubernetes-service-accounts.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-service-accounts.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-services.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-services.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-services.enabled Whether to enable auto configuration of the kubernetes-services component. This is enabled by default. Boolean camel.component.kubernetes-services.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-services.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-build-configs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-build-configs.enabled Whether to enable auto configuration of the openshift-build-configs component. This is enabled by default. Boolean camel.component.openshift-build-configs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. 
KubernetesClient camel.component.openshift-build-configs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-builds.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-builds.enabled Whether to enable auto configuration of the openshift-builds component. This is enabled by default. Boolean camel.component.openshift-builds.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-builds.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-deploymentconfigs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-deploymentconfigs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.openshift-deploymentconfigs.enabled Whether to enable auto configuration of the openshift-deploymentconfigs component. This is enabled by default. Boolean camel.component.openshift-deploymentconfigs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-deploymentconfigs.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency>", "kubernetes-persistent-volumes-claims:masterUrl", "from(\"direct:list\"). toF(\"kubernetes-persistent-volumes-claims:///?kubernetesClient=#kubernetesClient&operation=listPersistentVolumesClaims\"). to(\"mock:result\");", "from(\"direct:listByLabels\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put(\"key1\", \"value1\"); labels.put(\"key2\", \"value2\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_PERSISTENT_VOLUMES_CLAIMS_LABELS, labels); } }); toF(\"kubernetes-persistent-volumes-claims:///?kubernetesClient=#kubernetesClient&operation=listPersistentVolumesClaimsByLabels\"). to(\"mock:result\");" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kubernetes-persistent-volume-claim-component-starter
Chapter 1. Some Key Definitions
Chapter 1. Some Key Definitions 1.1. Result Set Caching Red Hat JBoss Data Virtualization allows you to cache the results of your queries and virtual procedure calls. This caching technique can yield significant performance gains if you find you are frequently running the same queries or procedures.
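As an illustration only, the following sketch shows what a cached query might look like, assuming the Teiid-style /*+ cache */ hint that underpins result set caching in Red Hat JBoss Data Virtualization; the view name and TTL value are hypothetical.
    cat > cached-query.sql <<'SQL'
    -- Hypothetical view and TTL; the hint asks the server to cache this result set for five minutes
    SELECT /*+ cache(ttl:300000) */ o.order_id, o.total
    FROM orders_view AS o
    WHERE o.status = 'OPEN';
    SQL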
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_5_caching_guide/chap-definitions
Chapter 2. Configuring your firewall
Chapter 2. Configuring your firewall If you use a firewall, you must configure it so that OpenShift Container Platform can access the sites that it requires to function. You must always grant access to some sites, and you grant access to more if you use Red Hat Insights, the Telemetry service, a cloud to host your cluster, and certain build strategies. 2.1. Configuring your firewall for OpenShift Container Platform Before you install OpenShift Container Platform, you must configure your firewall to grant access to the sites that OpenShift Container Platform requires. When using a firewall, make additional configurations to the firewall so that OpenShift Container Platform can access the sites that it requires to function. There are no special configuration considerations for services running on only controller nodes compared to worker nodes. Note If your environment has a dedicated load balancer in front of your OpenShift Container Platform cluster, review the allowlists between your firewall and load balancer to prevent unwanted network restrictions to your cluster. Procedure Set the following registry URLs for your firewall's allowlist: URL Port Function registry.redhat.io 443 Provides core container images access.redhat.com 443 Hosts a signature store that a container client requires for verifying images pulled from registry.access.redhat.com . In a firewall environment, ensure that this resource is on the allowlist. registry.access.redhat.com 443 Hosts all the container images that are stored on the Red Hat Ecosystem Catalog, including core container images. quay.io 443 Provides core container images cdn.quay.io 443 Provides core container images cdn01.quay.io 443 Provides core container images cdn02.quay.io 443 Provides core container images cdn03.quay.io 443 Provides core container images cdn04.quay.io 443 Provides core container images cdn05.quay.io 443 Provides core container images cdn06.quay.io 443 Provides core container images sso.redhat.com 443 The https://console.redhat.com site uses authentication from sso.redhat.com You can use the wildcards *.quay.io and *.openshiftapps.com instead of cdn.quay.io and cdn0[1-6].quay.io in your allowlist. You can use the wildcard *.access.redhat.com to simplify the configuration and ensure that all subdomains, including registry.access.redhat.com , are allowed. When you add a site, such as quay.io , to your allowlist, do not add a wildcard entry, such as *.quay.io , to your denylist. In most cases, image registries use a content delivery network (CDN) to serve images. If a firewall blocks access, image downloads are denied when the initial download request redirects to a hostname such as cdn01.quay.io . Set your firewall's allowlist to include any site that provides resources for a language or framework that your builds require. If you do not disable Telemetry, you must grant access to the following URLs to access Red Hat Insights: URL Port Function cert-api.access.redhat.com 443 Required for Telemetry api.access.redhat.com 443 Required for Telemetry infogw.api.openshift.com 443 Required for Telemetry console.redhat.com 443 Required for Telemetry and for insights-operator If you use Alibaba Cloud, Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) to host your cluster, you must grant access to the URLs that offer the cloud provider API and DNS for that cloud: Cloud URL Port Function Alibaba *.aliyuncs.com 443 Required to access Alibaba Cloud services and resources. 
Review the Alibaba endpoints_config.go file to find the exact endpoints to allow for the regions that you use. AWS aws.amazon.com 443 Used to install and manage clusters in an AWS environment. *.amazonaws.com Alternatively, if you choose to not use a wildcard for AWS APIs, you must include the following URLs in your allowlist: 443 Required to access AWS services and resources. Review the AWS Service Endpoints in the AWS documentation to find the exact endpoints to allow for the regions that you use. ec2.amazonaws.com 443 Used to install and manage clusters in an AWS environment. events.amazonaws.com 443 Used to install and manage clusters in an AWS environment. iam.amazonaws.com 443 Used to install and manage clusters in an AWS environment. route53.amazonaws.com 443 Used to install and manage clusters in an AWS environment. *.s3.amazonaws.com 443 Used to install and manage clusters in an AWS environment. *.s3.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. *.s3.dualstack.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. sts.amazonaws.com 443 Used to install and manage clusters in an AWS environment. sts.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. tagging.us-east-1.amazonaws.com 443 Used to install and manage clusters in an AWS environment. This endpoint is always us-east-1 , regardless of the region the cluster is deployed in. ec2.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. elasticloadbalancing.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. servicequotas.<aws_region>.amazonaws.com 443 Required. Used to confirm quotas for deploying the service. tagging.<aws_region>.amazonaws.com 443 Allows the assignment of metadata about AWS resources in the form of tags. *.cloudfront.net 443 Used to provide access to CloudFront. If you use the AWS Security Token Service (STS) and the private S3 bucket, you must provide access to CloudFront. GCP *.googleapis.com 443 Required to access GCP services and resources. Review Cloud Endpoints in the GCP documentation to find the endpoints to allow for your APIs. accounts.google.com 443 Required to access your GCP account. Microsoft Azure management.azure.com 443 Required to access Microsoft Azure services and resources. Review the Microsoft Azure REST API reference in the Microsoft Azure documentation to find the endpoints to allow for your APIs. *.blob.core.windows.net 443 Required to download Ignition files. login.microsoftonline.com 443 Required to access Microsoft Azure services and resources. Review the Azure REST API reference in the Microsoft Azure documentation to find the endpoints to allow for your APIs. Allowlist the following URLs: URL Port Function *.apps.<cluster_name>.<base_domain> 443 Required to access the default cluster routes unless you set an ingress wildcard during installation. api.openshift.com 443 Required both for your cluster token and to check if updates are available for the cluster. console.redhat.com 443 Required for your cluster token. mirror.openshift.com 443 Required to access mirrored installation content and images. This site is also a source of release image signatures, although the Cluster Version Operator needs only a single functioning source. quayio-production-s3.s3.amazonaws.com 443 Required to access Quay image content in AWS. 
rhcos.mirror.openshift.com 443 Required to download Red Hat Enterprise Linux CoreOS (RHCOS) images. sso.redhat.com 443 The https://console.redhat.com site uses authentication from sso.redhat.com storage.googleapis.com/openshift-release 443 A source of release image signatures, although the Cluster Version Operator needs only a single functioning source. Operators require route access to perform health checks. Specifically, the authentication and web console Operators connect to two routes to verify that the routes work. If you are the cluster administrator and do not want to allow *.apps.<cluster_name>.<base_domain> , then allow these routes: oauth-openshift.apps.<cluster_name>.<base_domain> canary-openshift-ingress-canary.apps.<cluster_name>.<base_domain> console-openshift-console.apps.<cluster_name>.<base_domain> , or the hostname that is specified in the spec.route.hostname field of the consoles.operator/cluster object if the field is not empty. Allowlist the following URLs for optional third-party content: URL Port Function registry.connect.redhat.com 443 Required for all third-party images and certified operators. rhc4tp-prod-z8cxf-image-registry-us-east-1-evenkyleffocxqvofrk.s3.dualstack.us-east-1.amazonaws.com 443 Provides access to container images hosted on registry.connect.redhat.com oso-rhc4tp-docker-registry.s3-us-west-2.amazonaws.com 443 Required for Sonatype Nexus, F5 Big IP operators. If you use a default Red Hat Network Time Protocol (NTP) server allow the following URLs: 1.rhel.pool.ntp.org 2.rhel.pool.ntp.org 3.rhel.pool.ntp.org Note If you do not use a default Red Hat NTP server, verify the NTP server for your platform and allow it in your firewall. Additional resources OpenID Connect requirements for AWS STS 2.2. OpenShift Container Platform network flow matrix The network flow matrix describes the ingress flows to OpenShift Container Platform services. The network information in the matrix is accurate for both bare-metal and cloud environments. Use the information in the network flow matrix to help you manage ingress traffic. You can restrict ingress traffic to essential flows to improve network security. To view or download the raw CSV content, see this resource . Additionally, consider the following dynamic port ranges when managing ingress traffic: 9000-9999 : Host level services 30000-32767 : Kubernetes node ports 49152-65535 : Dynamic or private ports Note The network flow matrix describes ingress traffic flows for a base OpenShift Container Platform installation. It does not describe network flows for additional components, such as optional Operators available from the Red Hat Marketplace. The matrix does not apply for hosted control planes, Red Hat build of MicroShift, or standalone clusters. Table 2.1. 
Network flow matrix Direction Protocol Port Namespace Service Pod Container Node Role Optional Ingress TCP 22 Host system service sshd master TRUE Ingress TCP 53 openshift-dns dns-default dnf-default dns master FALSE Ingress TCP 80 openshift-ingress router-default router-default router master FALSE Ingress TCP 111 Host system service rpcbind master TRUE Ingress TCP 443 openshift-ingress router-default router-default router master FALSE Ingress TCP 1936 openshift-ingress router-default router-default router master FALSE Ingress TCP 2379 openshift-etcd etcd etcd etcdctl master FALSE Ingress TCP 2380 openshift-etcd healthz etcd etcd master FALSE Ingress TCP 5050 openshift-machine-api ironic-proxy ironic-proxy master FALSE Ingress TCP 6080 openshift-kube-apiserver kube-apiserver kube-apiserver-insecure-readyz master FALSE Ingress TCP 6180 openshift-machine-api metal3-state metal3 metal3-httpd master FALSE Ingress TCP 6183 openshift-machine-api metal3-state metal3 metal3-httpd master FALSE Ingress TCP 6385 openshift-machine-api ironic-proxy ironic-proxy master FALSE Ingress TCP 6388 openshift-machine-api metal3-state metal3 metal3-httpd master FALSE Ingress TCP 6443 openshift-kube-apiserver apiserver kube-apiserver kube-apiserver master FALSE Ingress TCP 8080 openshift-network-operator network-operator network-operator master FALSE Ingress TCP 8798 openshift-machine-config-operator machine-config-daemon machine-config-daemon machine-config-daemon master FALSE Ingress TCP 9001 openshift-machine-config-operator machine-config-daemon machine-config-daemon kube-rbac-proxy master FALSE Ingress TCP 9099 openshift-cluster-version cluster-version-operator cluster-version-operator cluster-version-operator master FALSE Ingress TCP 9100 openshift-monitoring node-exporter node-exporter kube-rbac-proxy master FALSE Ingress TCP 9103 openshift-ovn-kubernetes ovn-kubernetes-node ovnkube-node kube-rbac-proxy-node master FALSE Ingress TCP 9104 openshift-network-operator metrics network-operator network-operator master FALSE Ingress TCP 9105 openshift-ovn-kubernetes ovn-kubernetes-node ovnkube-node kube-rbac-proxy-ovn-metrics master FALSE Ingress TCP 9107 openshift-ovn-kubernetes egressip-node-healthcheck ovnkube-node ovnkube-controller master FALSE Ingress TCP 9108 openshift-ovn-kubernetes ovn-kubernetes-control-plane ovnkube-control-plane kube-rbac-proxy master FALSE Ingress TCP 9192 openshift-cluster-machine-approver machine-approver machine-approver kube-rbac-proxy master FALSE Ingress TCP 9258 openshift-cloud-controller-manager-operator machine-approver cluster-cloud-controller-manager cluster-cloud-controller-manager master FALSE Ingress TCP 9444 openshift-kni-infra haproxy haproxy master FALSE Ingress TCP 9445 openshift-kni-infra haproxy haproxy master FALSE Ingress TCP 9447 openshift-machine-api metal3-baremetal-operator master FALSE Ingress TCP 9537 Host system service crio-metrics master FALSE Ingress TCP 9637 openshift-machine-config-operator kube-rbac-proxy-crio kube-rbac-proxy-crio kube-rbac-proxy-crio master FALSE Ingress TCP 9978 openshift-etcd etcd etcd etcd-metrics master FALSE Ingress TCP 9979 openshift-etcd etcd etcd etcd-metrics master FALSE Ingress TCP 9980 openshift-etcd etcd etcd etcd master FALSE Ingress TCP 10250 Host system service kubelet master FALSE Ingress TCP 10256 openshift-ovn-kubernetes ovnkube ovnkube ovnkube-controller master FALSE Ingress TCP 10257 openshift-kube-controller-manager kube-controller-manager kube-controller-manager kube-controller-manager master FALSE Ingress TCP 
10258 openshift-cloud-controller-manager-operator cloud-controller cloud-controller-manager cloud-controller-manager master FALSE Ingress TCP 10259 openshift-kube-scheduler scheduler openshift-kube-scheduler kube-scheduler master FALSE Ingress TCP 10260 openshift-cloud-controller-manager-operator cloud-controller cloud-controller-manager cloud-controller-manager master FALSE Ingress TCP 10300 openshift-cluster-csi-drivers csi-livenessprobe csi-driver-node csi-driver master FALSE Ingress TCP 10309 openshift-cluster-csi-drivers csi-node-driver csi-driver-node csi-node-driver-registrar master FALSE Ingress TCP 10357 openshift-kube-apiserver openshift-kube-apiserver-healthz kube-apiserver kube-apiserver-check-endpoints master FALSE Ingress TCP 17697 openshift-kube-apiserver openshift-kube-apiserver-healthz kube-apiserver kube-apiserver-check-endpoints master FALSE Ingress TCP 18080 openshift-kni-infra coredns coredns master FALSE Ingress TCP 22623 openshift-machine-config-operator machine-config-server machine-config-server machine-config-server master FALSE Ingress TCP 22624 openshift-machine-config-operator machine-config-server machine-config-server machine-config-server master FALSE Ingress UDP 53 openshift-dns dns-default dnf-default dns master FALSE Ingress UDP 111 Host system service rpcbind master TRUE Ingress UDP 6081 openshift-ovn-kubernetes ovn-kubernetes geneve master FALSE Ingress TCP 22 Host system service sshd worker TRUE Ingress TCP 53 openshift-dns dns-default dnf-default dns worker FALSE Ingress TCP 80 openshift-ingress router-default router-default router worker FALSE Ingress TCP 111 Host system service rpcbind worker TRUE Ingress TCP 443 openshift-ingress router-default router-default router worker FALSE Ingress TCP 1936 openshift-ingress router-default router-default router worker FALSE Ingress TCP 8798 openshift-machine-config-operator machine-config-daemon machine-config-daemon machine-config-daemon worker FALSE Ingress TCP 9001 openshift-machine-config-operator machine-config-daemon machine-config-daemon kube-rbac-proxy worker FALSE Ingress TCP 9100 openshift-monitoring node-exporter node-exporter kube-rbac-proxy worker FALSE Ingress TCP 9103 openshift-ovn-kubernetes ovn-kubernetes-node ovnkube-node kube-rbac-proxy-node worker FALSE Ingress TCP 9105 openshift-ovn-kubernetes ovn-kubernetes-node ovnkube-node kube-rbac-proxy-ovn-metrics worker FALSE Ingress TCP 9107 openshift-ovn-kubernetes egressip-node-healthcheck ovnkube-node ovnkube-controller worker FALSE Ingress TCP 9537 Host system service crio-metrics worker FALSE Ingress TCP 9637 openshift-machine-config-operator kube-rbac-proxy-crio kube-rbac-proxy-crio kube-rbac-proxy-crio worker FALSE Ingress TCP 10250 Host system service kubelet worker FALSE Ingress TCP 10256 openshift-ovn-kubernetes ovnkube ovnkube ovnkube-controller worker TRUE Ingress TCP 10300 openshift-cluster-csi-drivers csi-livenessprobe csi-driver-node csi-driver worker FALSE Ingress TCP 10309 openshift-cluster-csi-drivers csi-node-driver csi-driver-node csi-node-driver-registrar worker FALSE Ingress TCP 18080 openshift-kni-infra coredns coredns worker FALSE Ingress UDP 53 openshift-dns dns-default dnf-default dns worker FALSE Ingress UDP 111 Host system service rpcbind worker TRUE Ingress UDP 6081 openshift-ovn-kubernetes ovn-kubernetes geneve worker FALSE
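A quick way to sanity-check the allowlist before installation is to probe a few of the required endpoints from a machine that sits behind the same firewall as the cluster nodes. The following is a minimal sketch; the host list is only a small sample of the URLs above, and any HTTP status code in the output confirms that the connection was allowed through, while 000 indicates that it was blocked or timed out.
    for host in registry.redhat.io quay.io cdn01.quay.io api.openshift.com mirror.openshift.com; do
      code=$(curl -s -o /dev/null --connect-timeout 5 -w '%{http_code}' "https://${host}")
      echo "${host}:443 -> HTTP ${code}"   # 000 means the connection was blocked or timed out
    done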
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installation_configuration/configuring-firewall
3.16. Port Mirroring
3.16. Port Mirroring Port mirroring copies layer 3 network traffic on a given logical network and host to a virtual interface on a virtual machine. This virtual machine can be used for network debugging and tuning, intrusion detection, and monitoring the behavior of other virtual machines on the same host and logical network. The only traffic copied is internal to one logical network on one host. There is no increase in traffic on the network external to the host. However, a virtual machine with port mirroring enabled uses more host CPU and RAM than other virtual machines. Port mirroring is enabled or disabled in the vNIC profiles of logical networks, and has the following limitations: Hot linking vNICs with a profile that has port mirroring enabled is not supported. Port mirroring cannot be altered when the vNIC profile is attached to a virtual machine. Given the above limitations, it is recommended that you enable port mirroring on an additional, dedicated vNIC profile. Important Enabling port mirroring reduces the privacy of other network users.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/port_mirroring
Chapter 13. Editing applications
Chapter 13. Editing applications You can edit the configuration and the source code of the application you create using the Topology view. 13.1. Prerequisites You have the appropriate roles and permissions in a project to create and modify applications in OpenShift Container Platform. You have created and deployed an application on OpenShift Container Platform using the Developer perspective . You have logged in to the web console and have switched to the Developer perspective . 13.2. Editing the source code of an application using the Developer perspective You can use the Topology view in the Developer perspective to edit the source code of your application. Procedure In the Topology view, click the Edit Source code icon, displayed at the bottom-right of the deployed application, to access your source code and modify it. Note This feature is available only when you create applications using the From Git , From Catalog , and the From Dockerfile options. If the Eclipse Che Operator is installed in your cluster, a Che workspace is created and you are directed to the workspace to edit your source code. If it is not installed, you are directed to the Git repository that your source code is hosted in. 13.3. Editing the application configuration using the Developer perspective You can use the Topology view in the Developer perspective to edit the configuration of your application. Note Currently, only configurations of applications created by using the From Git , Container Image , From Catalog , or From Dockerfile options in the Add workflow of the Developer perspective can be edited. Configurations of applications created by using the CLI or the YAML option from the Add workflow cannot be edited. Prerequisites Ensure that you have created an application using the From Git , Container Image , From Catalog , or From Dockerfile options in the Add workflow. Procedure After you have created an application and it is displayed in the Topology view, right-click the application to see the edit options available. Figure 13.1. Edit application Click Edit application-name to see the Add workflow you used to create the application. The form is pre-populated with the values you had added while creating the application. Edit the necessary values for the application. Note You cannot edit the Name field in the General section, the CI/CD pipelines, or the Create a route to the application field in the Advanced Options section. Click Save to restart the build and deploy a new image. Figure 13.2. Edit and redeploy application
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/building_applications/odc-editing-applications
Chapter 3. Deploy OpenShift Data Foundation using IBM FlashSystem
Chapter 3. Deploy OpenShift Data Foundation using IBM FlashSystem OpenShift Data Foundation can use IBM FlashSystem storage available for consumption through OpenShift Container Platform clusters. You need to install the OpenShift Data Foundation operator and then create an OpenShift Data Foundation cluster for IBM FlashSystem storage. 3.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.14 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 3.2. Creating an OpenShift Data Foundation Cluster for external IBM FlashSystem storage You need to create a new OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator on the OpenShift Container Platform. Prerequisites A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . For Red Hat Enterprise Linux(R) operating system, ensure that there is iSCSI connectivity and then configure Linux multipath devices on the host. For Red Hat Enterprise Linux CoreOS or when the packages are already installed, configure Linux multipath devices on the host. Ensure to configure each worker with storage connectivity according to your storage system instructions. For the latest supported FlashSystem storage systems and versions, see IBM ODF FlashSystem driver documentation . Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. 
Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation and then click Create StorageSystem . In the Backing storage page, select the following options: Select Full deployment for the Deployment type option. Select Connect an external storage platform from the available options. Select IBM FlashSystem Storage from the Storage platform list. Click Next . In the Create storage class page, provide the following information: Enter a name for the storage class. When creating block storage persistent volumes, select the storage class <storage_class_name> for best performance. The storage class allows direct I/O path to the FlashSystem. Enter the following details of the IBM FlashSystem connection: IP address User name Password Pool name Select thick or thin for the Volume mode . Click Next . In the Capacity and nodes page, provide the necessary details: Select a value for Requested capacity. The available options are 0.5 TiB , 2 TiB , and 4 TiB . The requested capacity is dynamically allocated on the infrastructure storage class. Select at least three nodes in three different zones. It is recommended to start with at least 14 CPUs and 34 GiB of RAM per node. If the nodes selected do not match the OpenShift Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. Click Next . Optional: In the Security and network page, provide the necessary details: To enable encryption, select Enable data encryption for block and file storage . Choose one or both of the Encryption level options: Cluster-wide encryption to encrypt the entire cluster (block and file). StorageClass encryption to create encrypted persistent volumes (block only) using an encryption-enabled storage class. Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. Key Management Service Provider is set to Vault by default. Enter the Vault Service Name, host Address of the Vault server ('https://<hostname or ip>'), Port number, and Token. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace. Provide the CA Certificate, Client Certificate, and Client Private Key by uploading the respective PEM encoded certificate files. Click Save . Select Default (SDN) if you are using a single network or Custom (Multus) if you are using multiple network interfaces. Select a Public Network Interface from the dropdown. Select a Cluster Network Interface from the dropdown. NOTE: If you are using only one additional network interface, select the single NetworkAttachmentDefinition , that is, ocs-public-cluster for the Public Network Interface, and leave the Cluster Network Interface blank. Click Next . To enable in-transit encryption, select In-transit encryption . Select a Network . Click Next . In the Review and create page, review that all the details are correct: To modify any configuration settings, click Back to go back to the configuration page. Click Create StorageSystem . Verification Steps Verifying the state of the pods Click Workloads Pods from the left pane of the OpenShift Web Console. Select openshift-storage from the Project drop-down list.
Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Table 3.1. Pods corresponding to OpenShift Data Foundation components Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any worker node) ocs-metrics-exporter-* (1 pod on any worker node) odf-operator-controller-manager-* (1 pod on any worker node) odf-console-* (1 pod on any worker node) csi-addons-controller-manager-* (1 pod on any worker node) ibm-storage-odf-operator ibm-storage-odf-operator-* (2 pods on any worker nodes) ibm-odf-console-* ibm-flashsystem-storage ibm-flashsystem-storage-* (1 pod on any worker node) rook-ceph Operator rook-ceph-operator-* (1 pod on any worker node) Multicloud Object Gateway noobaa-operator-* (1 pod on any worker node) noobaa-core-* (1 pod on any worker node) noobaa-db-pg-* (1 pod on any worker node) noobaa-endpoint-* (1 pod on any worker node) CSI ibm-block-csi-* (1 pod on any worker node) Verifying that the OpenShift Data Foundation cluster is healthy In the Web Console, click Storage Data Foundation . In the Status card of the Overview tab, verify that the Storage System has a green tick mark. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . Verifying that the Multicloud Object Gateway is healthy In the Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object dashboard, see Monitoring OpenShift Data Foundation . Verifying that IBM FlashSystem is connected and the storage cluster is ready Run the following command to verify if the OpenShift Data Foundation cluster is connected to the external IBM FlashSystem. Verifying the StorageSystem of the storage Run the following command to verify the storageSystem of IBM FlashSystem storage cluster. Verifying the subscription of the IBM operator Run the following command to verify the subscription: Verifying the CSVs Run the following command to verify that the CSVs are in the succeeded state. Verifying the IBM operator and CSI pods Run the following command to verify the IBM operator and CSI pods:
[ "oc annotate namespace openshift-storage openshift.io/node-selector=", "oc get flashsystemclusters.odf.ibm.com NAME AGE PHASE CREATED AT ibm-flashsystemcluster 35s 2021-09-23T07:44:52Z", "oc get storagesystems.odf.openshift.io NAME STORAGE-SYSTEM-KIND STORAGE-SYSTEM-NAME ibm-flashsystemcluster-storagesystem flashsystemcluster.odf.ibm.com/v1alpha1 ibm-flashsystemcluster ocs-storagecluster-storagesystem storagecluster.ocs.openshift.io/v1 ocs-storagecluster", "oc get subscriptions.operators.coreos.com NAME PACKAGE SOURCE CHANNEL ibm-block-csi-operator-stable-certified-operators-openshift-marketplace ibm-block-csi-operator certified-operators stable ibm-storage-odf-operator ibm-storage-odf-operator odf-catalogsource stable-v1 noobaa-operator-alpha-odf-catalogsource-openshift-storage noobaa-operator odf-catalogsource alpha ocs-operator-alpha-odf-catalogsource-openshift-storage ocs-operator odf-catalogsource alpha odf-operator odf-operator odf-catalogsource alpha", "oc get csv NAME DISPLAY VERSION REPLACES PHASE ibm-block-csi-operator.v1.6.0 Operator for IBM block storage CSI driver 1.6.0 ibm-block-csi-operator.v1.5.0 Succeeded ibm-storage-odf-operator.v0.2.1 IBM Storage ODF operator 0.2.1 Installing noobaa-operator.v5.9.0 NooBaa Operator 5.9.0 Succeeded ocs-operator.v4.14.0 OpenShift Container Storage 4.14.0 Succeeded odf-operator.v4.14.0 OpenShift Data Foundation 4.14.0 Succeeded", "oc get pods NAME READY STATUS RESTARTS AGE 5cb2b16ec2b11bf63dbe691d44a63535dc026bb5315d5075dc6c398b3c58l94 0/1 Completed 0 10m 7c806f6568f85cf10d72508261a2535c220429b54dbcf87349b9b4b9838fctg 0/1 Completed 0 8m47s c4b05566c04876677a22d39fc9c02512401d0962109610e85c8fb900d3jd7k2 0/1 Completed 0 10m c5d1376974666727b02bf25b3a4828241612186744ef417a668b4bc1759rzts 0/1 Completed 0 10m ibm-block-csi-operator-7b656d6cc8-bqnwp 1/1 Running 0 8m3s ibm-odf-console-97cb7c84c-r52dq 0/1 ContainerCreating 0 8m4s ibm-storage-odf-operator-57b8bc47df-mgkc7 1/2 ImagePullBackOff 0 94s noobaa-operator-7698579d56-x2zqs 1/1 Running 0 9m37s ocs-metrics-exporter-94b57d764-zq2g2 1/1 Running 0 9m32s ocs-operator-5d96d778f6-vxlq5 1/1 Running 0 9m33s odf-catalogsource-j7q72 1/1 Running 0 10m odf-console-8987868cd-m7v29 1/1 Running 0 9m35s odf-operator-controller-manager-5dbf785564-rwsgq 2/2 Running 0 9m35s rook-ceph-operator-68b4b976d8-dlc6w 1/1 Running 0 9m32s" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_openshift_data_foundation_in_external_mode/deploy-openshift-data-foundation-using-ibm-flashsystem
Chapter 12. Accessing the RADOS Object Gateway S3 endpoint
Chapter 12. Accessing the RADOS Object Gateway S3 endpoint Users can access the RADOS Object Gateway (RGW) endpoint directly. Prerequisites A running OpenShift Data Foundation Platform Procedure Run the oc get service command to get the RGW service name. Run the oc expose command to expose the RGW service. Replace <RGW service name> with the RGW service name from the previous step. Replace <route name> with a route that you want to create for the RGW service. For example: Run the oc get route command to confirm that oc expose succeeded and that an RGW route exists. Example output: Verification To verify the ENDPOINT , run the following command: Replace <ENDPOINT> with the route that you get from the command in step 3. For example: Important To get the access key and secret key of the default user ocs-storagecluster-cephobjectstoreuser , run the following commands: Access key: Secret key:
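With the route exposed and the access key and secret key of ocs-storagecluster-cephobjectstoreuser retrieved as shown above, the endpoint can be exercised with ordinary aws CLI calls. The following is a minimal sketch; the bucket name, test file, and the ACCESS_KEY, SECRET_KEY, and RGW_ENDPOINT variables are illustrative placeholders, not values defined by this procedure.

# Assumes ACCESS_KEY and SECRET_KEY hold the decoded values from the commands above,
# and RGW_ENDPOINT holds the exposed route, for example http://rook-ceph-rgw-ocs.ocp.host.example.com
export AWS_ACCESS_KEY_ID="$ACCESS_KEY"
export AWS_SECRET_ACCESS_KEY="$SECRET_KEY"

# Create a test bucket, upload a small object, and list the bucket contents
aws s3 --no-verify-ssl --endpoint "$RGW_ENDPOINT" mb s3://rgw-smoke-test
echo "hello rgw" > /tmp/hello.txt
aws s3 --no-verify-ssl --endpoint "$RGW_ENDPOINT" cp /tmp/hello.txt s3://rgw-smoke-test/
aws s3 --no-verify-ssl --endpoint "$RGW_ENDPOINT" ls s3://rgw-smoke-test/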
[ "oc get service -n openshift-storage NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE (...) rook-ceph-rgw-ocs-storagecluster-cephobjectstore ClusterIP 172.30.145.254 <none> 80/TCP,443/TCP 5d7h (...)", "oc expose svc/<RGW service name> --hostname=<route name>", "oc expose svc/rook-ceph-rgw-ocs-storagecluster-cephobjectstore --hostname=rook-ceph-rgw-ocs.ocp.host.example.com", "oc get route -n openshift-storage", "NAME HOST/PORT PATH rook-ceph-rgw-ocs-storagecluster-cephobjectstore rook-ceph-rgw-ocs.ocp.host.example.com SERVICES PORT TERMINATION WILDCARD rook-ceph-rgw-ocs-storagecluster-cephobjectstore http <none>", "aws s3 --no-verify-ssl --endpoint <ENDPOINT> ls", "aws s3 --no-verify-ssl --endpoint http://rook-ceph-rgw-ocs.ocp.host.example.com ls", "oc get secret rook-ceph-object-user-ocs-storagecluster-cephobjectstore-ocs-storagecluster-cephobjectstoreuser -n openshift-storage -o yaml | grep -w \"AccessKey:\" | head -n1 | awk '{print USD2}' | base64 --decode", "oc get secret rook-ceph-object-user-ocs-storagecluster-cephobjectstore-ocs-storagecluster-cephobjectstoreuser -n openshift-storage -o yaml | grep -w \"SecretKey:\" | head -n1 | awk '{print USD2}' | base64 --decode" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/managing_hybrid_and_multicloud_resources/Accessing-the-RADOS-Object-Gateway-S3-endpoint_rhodf
Chapter 9. Gathering the observability data from multiple clusters
Chapter 9. Gathering the observability data from multiple clusters For a multicluster configuration, you can create one OpenTelemetry Collector instance in each one of the remote clusters and then forward all the telemetry data to one OpenTelemetry Collector instance. Prerequisites The Red Hat build of OpenTelemetry Operator is installed. The Tempo Operator is installed. A TempoStack instance is deployed on the cluster. The following mounted certificates: Issuer, self-signed certificate, CA issuer, client and server certificates. To create any of these certificates, see step 1. Procedure Mount the following certificates in the OpenTelemetry Collector instance, skipping already mounted certificates. An Issuer to generate the certificates by using the cert-manager Operator for Red Hat OpenShift. apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: selfsigned-issuer spec: selfSigned: {} A self-signed certificate. apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: ca spec: isCA: true commonName: ca subject: organizations: - <your_organization_name> organizationalUnits: - Widgets secretName: ca-secret privateKey: algorithm: ECDSA size: 256 issuerRef: name: selfsigned-issuer kind: Issuer group: cert-manager.io A CA issuer. apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: test-ca-issuer spec: ca: secretName: ca-secret The client and server certificates. apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: server spec: secretName: server-tls isCA: false usages: - server auth - client auth dnsNames: - "otel.observability.svc.cluster.local" 1 issuerRef: name: ca-issuer --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: client spec: secretName: client-tls isCA: false usages: - server auth - client auth dnsNames: - "otel.observability.svc.cluster.local" 2 issuerRef: name: ca-issuer 1 List of exact DNS names to be mapped to a solver in the server OpenTelemetry Collector instance. 2 List of exact DNS names to be mapped to a solver in the client OpenTelemetry Collector instance. Create a service account for the OpenTelemetry Collector instance. Example ServiceAccount apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment Create a cluster role for the service account. Example ClusterRole apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: ["", "config.openshift.io"] resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"] verbs: ["get", "watch", "list"] 1 The k8sattributesprocessor requires permissions for pods and namespace resources. 2 The resourcedetectionprocessor requires permissions for infrastructures and status. Bind the cluster role to the service account. Example ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-<example> roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io Create the YAML file to define the OpenTelemetryCollector custom resource (CR) in the edge clusters. 
Example OpenTelemetryCollector custom resource for the edge clusters apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: otel-collector-<example> spec: mode: daemonset serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlphttp: endpoint: https://observability-cluster.com:443 1 tls: insecure: false cert_file: /certs/server.crt key_file: /certs/server.key ca_file: /certs/ca.crt service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlphttp] volumes: - name: otel-certs secret: name: otel-certs volumeMounts: - name: otel-certs mountPath: /certs 1 The Collector exporter is configured to export OTLP HTTP and points to the OpenTelemetry Collector from the central cluster. Create the YAML file to define the OpenTelemetryCollector custom resource (CR) in the central cluster. Example OpenTelemetryCollector custom resource for the central cluster apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otlp-receiver namespace: observability spec: mode: "deployment" ingress: type: route route: termination: "passthrough" config: receivers: otlp: protocols: http: tls: 1 cert_file: /certs/server.crt key_file: /certs/server.key client_ca_file: /certs/ca.crt exporters: otlp: endpoint: "tempo-<simplest>-distributor:4317" 2 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [] exporters: [otlp] volumes: - name: otel-certs secret: name: otel-certs volumeMounts: - name: otel-certs mountPath: /certs 1 The Collector receiver requires the certificates listed in the first step. 2 The Collector exporter is configured to export OTLP and points to the Tempo distributor endpoint, which in this example is "tempo-simplest-distributor:4317" and has already been created.
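With both custom resources applied, a brief check on each side confirms the wiring before sending traces. This is a sketch rather than part of the documented procedure; the namespaces follow the examples above.

# On the central cluster: the deployment-mode Collector and its passthrough route
oc get pods -n observability
oc get route -n observability

# On each edge cluster: the daemonset-mode Collector pods created by the Operator
oc get pods -n otel-collector-<example>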
[ "apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: selfsigned-issuer spec: selfSigned: {}", "apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: ca spec: isCA: true commonName: ca subject: organizations: - <your_organization_name> organizationalUnits: - Widgets secretName: ca-secret privateKey: algorithm: ECDSA size: 256 issuerRef: name: selfsigned-issuer kind: Issuer group: cert-manager.io", "apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: test-ca-issuer spec: ca: secretName: ca-secret", "apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: server spec: secretName: server-tls isCA: false usages: - server auth - client auth dnsNames: - \"otel.observability.svc.cluster.local\" 1 issuerRef: name: ca-issuer --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: client spec: secretName: client-tls isCA: false usages: - server auth - client auth dnsNames: - \"otel.observability.svc.cluster.local\" 2 issuerRef: name: ca-issuer", "apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-<example> roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: otel-collector-<example> spec: mode: daemonset serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlphttp: endpoint: https://observability-cluster.com:443 1 tls: insecure: false cert_file: /certs/server.crt key_file: /certs/server.key ca_file: /certs/ca.crt service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp] volumes: - name: otel-certs secret: name: otel-certs volumeMounts: - name: otel-certs mountPath: /certs", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otlp-receiver namespace: observability spec: mode: \"deployment\" ingress: type: route route: termination: \"passthrough\" config: receivers: otlp: protocols: http: tls: 1 cert_file: /certs/server.crt key_file: /certs/server.key client_ca_file: /certs/ca.crt exporters: otlp: endpoint: \"tempo-<simplest>-distributor:4317\" 2 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [] exporters: [otlp] volumes: - name: otel-certs secret: name: otel-certs volumeMounts: - name: otel-certs mountPath: /certs" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/red_hat_build_of_opentelemetry/otel-gathering-observability-data-from-multiple-clusters
1.5. Red Hat Certificate System services
1.5. Red Hat Certificate System services There are various different interfaces for managing certificates and subsystems, depending on the user type: administrators, agents, auditors, and end users. For an overview of the different functions that are performed through each interface, see the User Interfaces section.
null
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/admin_console
Providing feedback on Red Hat build of Quarkus documentation
Providing feedback on Red Hat build of Quarkus documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html/compiling_your_red_hat_build_of_quarkus_applications_to_native_executables/proc_providing-feedback-on-red-hat-documentation_quarkus-building-native-executable
Chapter 7. Dynamic provisioning
Chapter 7. Dynamic provisioning 7.1. About dynamic provisioning The StorageClass resource object describes and classifies storage that can be requested, as well as provides a means for passing parameters for dynamically provisioned storage on demand. StorageClass objects can also serve as a management mechanism for controlling different levels of storage and access to the storage. Cluster Administrators ( cluster-admin ) or Storage Administrators ( storage-admin ) define and create the StorageClass objects that users can request without needing any detailed knowledge about the underlying storage volume sources. The OpenShift Container Platform persistent volume framework enables this functionality and allows administrators to provision a cluster with persistent storage. The framework also gives users a way to request those resources without having any knowledge of the underlying infrastructure. Many storage types are available for use as persistent volumes in OpenShift Container Platform. While all of them can be statically provisioned by an administrator, some types of storage are created dynamically using the built-in provider and plugin APIs. 7.2. Available dynamic provisioning plugins OpenShift Container Platform provides the following provisioner plugins, which have generic implementations for dynamic provisioning that use the cluster's configured provider's API to create new storage resources: Storage type Provisioner plugin name Notes Red Hat OpenStack Platform (RHOSP) Cinder kubernetes.io/cinder RHOSP Manila Container Storage Interface (CSI) manila.csi.openstack.org Once installed, the OpenStack Manila CSI Driver Operator and ManilaDriver automatically create the required storage classes for all available Manila share types needed for dynamic provisioning. AWS Elastic Block Store (EBS) kubernetes.io/aws-ebs For dynamic provisioning when using multiple clusters in different zones, tag each node with Key=kubernetes.io/cluster/<cluster_name>,Value=<cluster_id> where <cluster_name> and <cluster_id> are unique per cluster. Azure Disk kubernetes.io/azure-disk Azure File kubernetes.io/azure-file The persistent-volume-binder service account requires permissions to create and get secrets to store the Azure storage account and keys. GCE Persistent Disk (gcePD) kubernetes.io/gce-pd In multi-zone configurations, it is advisable to run one OpenShift Container Platform cluster per GCE project to avoid PVs from being created in zones where no node in the current cluster exists. VMware vSphere kubernetes.io/vsphere-volume Important Any chosen provisioner plugin also requires configuration for the relevant cloud, host, or third-party provider as per the relevant documentation. 7.3. Defining a storage class StorageClass objects are currently a globally scoped object and must be created by cluster-admin or storage-admin users. Important The Cluster Storage Operator might install a default storage class depending on the platform in use. This storage class is owned and controlled by the operator. It cannot be deleted or modified beyond defining annotations and labels. If different behavior is desired, you must define a custom storage class. The following sections describe the basic definition for a StorageClass object and specific examples for each of the supported plugin types. 7.3.1. Basic StorageClass object definition The following resource shows the parameters and default values that you use to configure a storage class. This example uses the AWS ElasticBlockStore (EBS) object definition. 
Sample StorageClass definition kind: StorageClass 1 apiVersion: storage.k8s.io/v1 2 metadata: name: <storage-class-name> 3 annotations: 4 storageclass.kubernetes.io/is-default-class: 'true' ... provisioner: kubernetes.io/aws-ebs 5 parameters: 6 type: gp2 ... 1 (required) The API object type. 2 (required) The current apiVersion. 3 (required) The name of the storage class. 4 (optional) Annotations for the storage class. 5 (required) The type of provisioner associated with this storage class. 6 (optional) The parameters required for the specific provisioner, this will change from plugin to plugin. 7.3.2. Storage class annotations To set a storage class as the cluster-wide default, add the following annotation to your storage class metadata: storageclass.kubernetes.io/is-default-class: "true" For example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: "true" ... This enables any persistent volume claim (PVC) that does not specify a specific storage class to automatically be provisioned through the default storage class. However, your cluster can have more than one storage class, but only one of them can be the default storage class. Note The beta annotation storageclass.beta.kubernetes.io/is-default-class is still working; however, it will be removed in a future release. To set a storage class description, add the following annotation to your storage class metadata: kubernetes.io/description: My Storage Class Description For example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: kubernetes.io/description: My Storage Class Description ... 7.3.3. RHOSP Cinder object definition cinder-storageclass.yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/cinder parameters: type: fast 2 availability: nova 3 fsType: ext4 4 1 Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 Volume type created in Cinder. Default is empty. 3 Availability Zone. If not specified, volumes are generally round-robined across all active zones where the OpenShift Container Platform cluster has a node. 4 File system that is created on dynamically provisioned volumes. This value is copied to the fsType field of dynamically provisioned persistent volumes and the file system is created when the volume is mounted for the first time. The default value is ext4 . 7.3.4. RHOSP Manila Container Storage Interface (CSI) object definition Once installed, the OpenStack Manila CSI Driver Operator and ManilaDriver automatically create the required storage classes for all available Manila share types needed for dynamic provisioning. 7.3.5. AWS Elastic Block Store (EBS) object definition aws-ebs-storageclass.yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/aws-ebs parameters: type: io1 2 iopsPerGB: "10" 3 encrypted: "true" 4 kmsKeyId: keyvalue 5 fsType: ext4 6 1 (required) Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 (required) Select from io1 , gp2 , sc1 , st1 . The default is gp2 . See the AWS documentation for valid Amazon Resource Name (ARN) values. 3 Optional: Only for io1 volumes. I/O operations per second per GiB. The AWS volume plugin multiplies this with the size of the requested volume to compute IOPS of the volume. 
The value cap is 20,000 IOPS, which is the maximum supported by AWS. See the AWS documentation for further details. 4 Optional: Denotes whether to encrypt the EBS volume. Valid values are true or false . 5 Optional: The full ARN of the key to use when encrypting the volume. If none is supplied, but encrypted is set to true , then AWS generates a key. See the AWS documentation for a valid ARN value. 6 Optional: File system that is created on dynamically provisioned volumes. This value is copied to the fsType field of dynamically provisioned persistent volumes and the file system is created when the volume is mounted for the first time. The default value is ext4 . 7.3.6. Azure Disk object definition azure-advanced-disk-storageclass.yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/azure-disk volumeBindingMode: WaitForFirstConsumer 2 allowVolumeExpansion: true parameters: kind: Managed 3 storageaccounttype: Premium_LRS 4 reclaimPolicy: Delete 1 Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 Using WaitForFirstConsumer is strongly recommended. This provisions the volume while allowing enough storage to schedule the pod on a free worker node from an available zone. 3 Possible values are Shared (default), Managed , and Dedicated . Important Red Hat only supports the use of kind: Managed in the storage class. With Shared and Dedicated , Azure creates unmanaged disks, while OpenShift Container Platform creates a managed disk for machine OS (root) disks. But because Azure Disk does not allow the use of both managed and unmanaged disks on a node, unmanaged disks created with Shared or Dedicated cannot be attached to OpenShift Container Platform nodes. 4 Azure storage account SKU tier. Default is empty. Note that Premium VMs can attach both Standard_LRS and Premium_LRS disks, Standard VMs can only attach Standard_LRS disks, Managed VMs can only attach managed disks, and unmanaged VMs can only attach unmanaged disks. If kind is set to Shared , Azure creates all unmanaged disks in a few shared storage accounts in the same resource group as the cluster. If kind is set to Managed , Azure creates new managed disks. If kind is set to Dedicated and a storageAccount is specified, Azure uses the specified storage account for the new unmanaged disk in the same resource group as the cluster. For this to work: The specified storage account must be in the same region. Azure Cloud Provider must have write access to the storage account. If kind is set to Dedicated and a storageAccount is not specified, Azure creates a new dedicated storage account for the new unmanaged disk in the same resource group as the cluster. 7.3.7. Azure File object definition The Azure File storage class uses secrets to store the Azure storage account name and the storage account key that are required to create an Azure Files share. These permissions are created as part of the following procedure. Procedure Define a ClusterRole object that allows access to create and view secrets: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: # name: system:azure-cloud-provider name: <persistent-volume-binder-role> 1 rules: - apiGroups: [''] resources: ['secrets'] verbs: ['get','create'] 1 The name of the cluster role to view and create secrets.
Add the cluster role to the service account: USD oc adm policy add-cluster-role-to-user <persistent-volume-binder-role> system:serviceaccount:kube-system:persistent-volume-binder Create the Azure File StorageClass object: kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <azure-file> 1 provisioner: kubernetes.io/azure-file parameters: location: eastus 2 skuName: Standard_LRS 3 storageAccount: <storage-account> 4 reclaimPolicy: Delete volumeBindingMode: Immediate 1 Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 Location of the Azure storage account, such as eastus . Default is empty, meaning that a new Azure storage account will be created in the OpenShift Container Platform cluster's location. 3 SKU tier of the Azure storage account, such as Standard_LRS . Default is empty, meaning that a new Azure storage account will be created with the Standard_LRS SKU. 4 Name of the Azure storage account. If a storage account is provided, then skuName and location are ignored. If no storage account is provided, then the storage class searches for any storage account that is associated with the resource group for any accounts that match the defined skuName and location . 7.3.7.1. Considerations when using Azure File The following file system features are not supported by the default Azure File storage class: Symlinks Hard links Extended attributes Sparse files Named pipes Additionally, the owner user identifier (UID) of the Azure File mounted directory is different from the process UID of the container. The uid mount option can be specified in the StorageClass object to define a specific user identifier to use for the mounted directory. The following StorageClass object demonstrates modifying the user and group identifier, along with enabling symlinks for the mounted directory. kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: azure-file mountOptions: - uid=1500 1 - gid=1500 2 - mfsymlinks 3 provisioner: kubernetes.io/azure-file parameters: location: eastus skuName: Standard_LRS reclaimPolicy: Delete volumeBindingMode: Immediate 1 Specifies the user identifier to use for the mounted directory. 2 Specifies the group identifier to use for the mounted directory. 3 Enables symlinks. 7.3.8. GCE PersistentDisk (gcePD) object definition gce-pd-storageclass.yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/gce-pd parameters: type: pd-standard 2 replication-type: none volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true reclaimPolicy: Delete 1 Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 Select either pd-standard or pd-ssd . The default is pd-standard . 7.3.9. VMware vSphere object definition vsphere-storageclass.yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/vsphere-volume 2 parameters: diskformat: thin 3 1 Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 For more information about using VMware vSphere with OpenShift Container Platform, see the VMware vSphere documentation . 3 diskformat : thin , zeroedthick and eagerzeroedthick are all valid disk formats. See vSphere docs for additional details regarding the disk format types. The default value is thin . 7.4. 
Changing the default storage class Use the following process to change the default storage class. For example, suppose that you have two defined storage classes, gp2 and standard , and you want to change the default storage class from gp2 to standard . List the storage classes: USD oc get storageclass Example output NAME TYPE gp2 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs 1 (default) denotes the default storage class. Change the value of the storageclass.kubernetes.io/is-default-class annotation to false for the current default storage class: USD oc patch storageclass gp2 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}' Make another storage class the default by setting its storageclass.kubernetes.io/is-default-class annotation to true : USD oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}' Verify the changes: USD oc get storageclass Example output NAME TYPE gp2 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs
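As a quick follow-up check, a claim that omits storageClassName should now be provisioned through the new default. The following is a minimal sketch; the claim name and size are illustrative and not part of this procedure.

# Create a claim without a storageClassName; the default storage class is applied
oc apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: default-class-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF

# The STORAGECLASS column should show the new default; with a WaitForFirstConsumer
# class the claim stays Pending until a pod consumes it
oc get pvc default-class-test
oc get pvc default-class-test -o jsonpath='{.spec.storageClassName}{"\n"}'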
[ "kind: StorageClass 1 apiVersion: storage.k8s.io/v1 2 metadata: name: <storage-class-name> 3 annotations: 4 storageclass.kubernetes.io/is-default-class: 'true' provisioner: kubernetes.io/aws-ebs 5 parameters: 6 type: gp2", "storageclass.kubernetes.io/is-default-class: \"true\"", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: \"true\"", "kubernetes.io/description: My Storage Class Description", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: kubernetes.io/description: My Storage Class Description", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/cinder parameters: type: fast 2 availability: nova 3 fsType: ext4 4", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/aws-ebs parameters: type: io1 2 iopsPerGB: \"10\" 3 encrypted: \"true\" 4 kmsKeyId: keyvalue 5 fsType: ext4 6", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/azure-disk volumeBindingMode: WaitForFirstConsumer 2 allowVolumeExpansion: true parameters: kind: Managed 3 storageaccounttype: Premium_LRS 4 reclaimPolicy: Delete", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: system:azure-cloud-provider name: <persistent-volume-binder-role> 1 rules: - apiGroups: [''] resources: ['secrets'] verbs: ['get','create']", "oc adm policy add-cluster-role-to-user <persistent-volume-binder-role> system:serviceaccount:kube-system:persistent-volume-binder", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <azure-file> 1 provisioner: kubernetes.io/azure-file parameters: location: eastus 2 skuName: Standard_LRS 3 storageAccount: <storage-account> 4 reclaimPolicy: Delete volumeBindingMode: Immediate", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: azure-file mountOptions: - uid=1500 1 - gid=1500 2 - mfsymlinks 3 provisioner: kubernetes.io/azure-file parameters: location: eastus skuName: Standard_LRS reclaimPolicy: Delete volumeBindingMode: Immediate", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/gce-pd parameters: type: pd-standard 2 replication-type: none volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true reclaimPolicy: Delete", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/vsphere-volume 2 parameters: diskformat: thin 3", "oc get storageclass", "NAME TYPE gp2 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs", "oc patch storageclass gp2 -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'", "oc patch storageclass standard -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'", "oc get storageclass", "NAME TYPE gp2 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/storage/dynamic-provisioning
Chapter 5. Using automount in IdM
Chapter 5. Using automount in IdM Automount is a way to manage, organize, and access directories across multiple systems. Automount automatically mounts a directory whenever access to it is requested. This works well within an Identity Management (IdM) domain as it allows you to share directories on clients within the domain easily. The example uses the following scenario: nfs-server.idm.example.com is the fully-qualified domain name (FQDN) of a Network File System (NFS) server. For the sake of simplicity, nfs-server.idm.example.com is an IdM client that provides the maps for the raleigh automount location. Note An automount location is a unique set of NFS maps. Ideally, these maps are all located in the same geographical region so that, for example, the clients can benefit from fast connections, but this is not mandatory. The NFS server exports the /exports/project directory as read-write. Any IdM user belonging to the developers group can access the contents of the exported directory as /devel/project/ on any IdM client that uses the raleigh automount location. idm-client.idm.example.com is an IdM client that uses the raleigh automount location. Important If you want to use a Samba server instead of an NFS server to provide the shares for IdM clients, see the Red Hat Knowledgebase solution How do I configure kerberized CIFS mounts with Autofs in an IPA environment? . 5.1. Autofs and automount in IdM The autofs service automates the mounting of directories, as needed, by directing the automount daemon to mount directories when they are accessed. In addition, after a period of inactivity, autofs directs automount to unmount auto-mounted directories. Unlike static mounting, on-demand mounting saves system resources. Automount maps On a system that utilizes autofs , the automount configuration is stored in several different files. The primary automount configuration file is /etc/auto.master , which contains the master mapping of automount mount points, and their associated resources, on a system. This mapping is known as automount maps . The /etc/auto.master configuration file contains the master map . It can contain references to other maps. These maps can either be direct or indirect. Direct maps use absolute path names for their mount points, while indirect maps use relative path names. Automount configuration in IdM While automount typically retrieves its map data from the local /etc/auto.master and associated files, it can also retrieve map data from other sources. One common source is an LDAP server. In the context of Identity Management (IdM), this is a 389 Directory Server. If a system that uses autofs is a client in an IdM domain, the automount configuration is not stored in local configuration files. Instead, the autofs configuration, such as maps, locations, and keys, is stored as LDAP entries in the IdM directory. For example, for the idm.example.com IdM domain, the default master map is stored as follows: Additional resources Mounting file systems on demand 5.2. Setting up an NFS server with Kerberos in a Red Hat Enterprise Linux Identity Management domain If you use Red Hat Enterprise Linux Identity Management (IdM), you can join your NFS server to the IdM domain. This enables you to centrally manage users and groups and to use Kerberos for authentication, integrity protection, and traffic encryption. Prerequisites The NFS server is enrolled in a Red Hat Enterprise Linux Identity Management (IdM) domain. The NFS server is running and configured. 
Procedure Obtain a Kerberos ticket as an IdM administrator: Create an nfs/<FQDN> service principal: Retrieve the nfs service principal from IdM, and store it in the /etc/krb5.keytab file: Optional: Display the principals in the /etc/krb5.keytab file: By default, the IdM client adds the host principal to the /etc/krb5.keytab file when you join the host to the IdM domain. If the host principal is missing, use the ipa-getkeytab -s idm_server.idm.example.com -p host/nfs_server.idm.example.com -k /etc/krb5.keytab command to add it. Use the ipa-client-automount utility to configure mapping of IdM IDs: Update your /etc/exports file, and add the Kerberos security method to the client options. For example: If you want your clients to be able to select from multiple security methods, specify them separated by colons: Reload the exported file systems: 5.3. Configuring automount locations and maps in IdM using the IdM CLI A location is a set of maps, which are all stored in auto.master . A location can store multiple maps. The location entry only works as a container for map entries; it is not an automount configuration in and of itself. As a system administrator in Identity Management (IdM), you can configure automount locations and maps in IdM so that IdM users in the specified locations can access shares exported by an NFS server by navigating to specific mount points on their hosts. Both the exported NFS server directory and the mount points are specified in the maps. The example describes how to configure the raleigh location and a map that mounts the nfs-server.idm.example.com:/exports/project share on the /devel/ mount point on the IdM client as a read-write directory. Prerequisites You are logged in as an IdM administrator on any IdM-enrolled host. Procedure Create the raleigh automount location: Create an auto.devel automount map in the raleigh location: Add the keys and mount information for the exports/ share: Add the key and mount information for the auto.devel map: Add the key and mount information for the auto.master map: 5.4. Configuring automount on an IdM client As an Identity Management (IdM) system administrator, you can configure automount services on an IdM client so that NFS shares configured for a location to which the client has been added are accessible to an IdM user automatically when the user logs in to the client. The example describes how to configure an IdM client to use automount services that are available in the raleigh location. Prerequisites You have root access to the IdM client. You are logged in as an IdM administrator. The automount location exists. The example location is raleigh . Procedure On the IdM client, enter the ipa-client-automount command and specify the location. Use the -U option to run the script unattended: Stop the autofs service, clear the SSSD cache, and start the autofs service to load the new configuration settings: 5.5. Verifying that an IdM user can access NFS shares on an IdM client As an Identity Management (IdM) system administrator, you can test whether an IdM user who is a member of a specific group can access NFS shares when logged in to a specific IdM client. In the example, the following scenario is tested: An IdM user named idm_user belonging to the developers group can read and write the contents of the files in the /devel/project directory automounted on idm-client.idm.example.com , an IdM client located in the raleigh automount location. Prerequisites You have set up an NFS server with Kerberos on an IdM host .
You have configured automount locations, maps, and mount points in IdM that define how IdM users can access the NFS share. You have configured automount on the IdM client . Procedure Verify that the IdM user can access the read-write directory: Connect to the IdM client as the IdM user: Obtain the ticket-granting ticket (TGT) for the IdM user: Optional: View the group membership of the IdM user: Navigate to the /devel/project directory: List the directory contents: Add a line to the file in the directory to test the write permission: Optional: View the updated contents of the file: The output confirms that idm_user can write into the file.
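Beyond the per-user checks above, two quick commands can confirm the maps on the IdM side and the mount on the client side. This is a sketch that assumes the raleigh location and the /devel/project mount point from this chapter.

# Render the automount maps stored in IdM for the raleigh location in auto.master file format
ipa automountlocation-tofiles raleigh

# On idm-client.idm.example.com, trigger the on-demand mount and show the mounted share
ls /devel/project >/dev/null && df -h /devel/project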
[ "dn: automountmapname=auto.master,cn=default,cn=automount,dc=idm,dc=example,dc=com objectClass: automountMap objectClass: top automountMapName: auto.master", "kinit admin", "ipa service-add nfs/nfs_server.idm.example.com", "ipa-getkeytab -s idm_server.idm.example.com -p nfs/nfs_server.idm.example.com -k /etc/krb5.keytab", "klist -k /etc/krb5.keytab Keytab name: FILE:/etc/krb5.keytab KVNO Principal ---- -------------------------------------------------------------------------- 1 nfs/[email protected] 1 nfs/[email protected] 1 nfs/[email protected] 1 nfs/[email protected] 7 host/[email protected] 7 host/[email protected] 7 host/[email protected] 7 host/[email protected]", "ipa-client-automount Searching for IPA server IPA server: DNS discovery Location: default Continue to configure the system with these values? [no]: yes Configured /etc/idmapd.conf Restarting sssd, waiting for it to become available. Started autofs", "/nfs/projects/ 192.0.2.0/24(rw, sec=krb5i )", "/nfs/projects/ 192.0.2.0/24(rw, sec=krb5:krb5i:krb5p )", "exportfs -r", "ipa automountlocation-add raleigh ---------------------------------- Added automount location \"raleigh\" ---------------------------------- Location: raleigh", "ipa automountmap-add raleigh auto.devel -------------------------------- Added automount map \"auto.devel\" -------------------------------- Map: auto.devel", "ipa automountkey-add raleigh auto.devel --key='*' --info='-sec=krb5p,vers=4 nfs-server.idm.example.com:/exports/&' ----------------------- Added automount key \"*\" ----------------------- Key: * Mount information: -sec=krb5p,vers=4 nfs-server.idm.example.com:/exports/&", "ipa automountkey-add raleigh auto.master --key=/devel --info=auto.devel ---------------------------- Added automount key \"/devel\" ---------------------------- Key: /devel Mount information: auto.devel", "ipa-client-automount --location raleigh -U", "systemctl stop autofs ; sss_cache -E ; systemctl start autofs", "ssh [email protected] Password:", "kinit idm_user", "ipa user-show idm_user User login: idm_user [...] Member of groups: developers, ipausers", "cd /devel/project", "ls rw_file", "echo \"idm_user can write into the file\" > rw_file", "cat rw_file this is a read-write file idm_user can write into the file" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_external_red_hat_utilities_with_identity_management/using-automount-in-idm_using-external-red-hat-utilities-with-idm
Chapter 19. PersistentClaimStorageOverride schema reference
Chapter 19. PersistentClaimStorageOverride schema reference Used in: PersistentClaimStorage Property Description class The storage class to use for dynamic volume allocation for this broker. string broker The ID of the Kafka broker (broker identifier). integer
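A hedged usage sketch follows, showing where PersistentClaimStorageOverride entries sit inside a Kafka custom resource so that each broker's persistent volume claim uses a different storage class. The cluster name, listener, sizes, and storage class names are illustrative, and the surrounding Kafka and ZooKeeper settings are a minimal assumption rather than values taken from this reference.

oc apply -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
      # One override per broker id; that broker's claim uses the named class
      overrides:
        - broker: 0
          class: my-storage-class-zone-1a
        - broker: 1
          class: my-storage-class-zone-1b
        - broker: 2
          class: my-storage-class-zone-1c
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 100Gi
  entityOperator:
    topicOperator: {}
    userOperator: {}
EOF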
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-persistentclaimstorageoverride-reference
Appendix C. Building cloud images for Red Hat Satellite
Appendix C. Building cloud images for Red Hat Satellite Use this section to build and register images to Red Hat Satellite. You can use a preconfigured Red Hat Enterprise Linux KVM guest QCOW2 image: Latest RHEL 9 KVM Guest Image Latest RHEL 8 KVM Guest Image These images contain cloud-init . To function properly, they must use ec2-compatible metadata services for provisioning an SSH key. Note For the KVM guest images: The root account in the image is disabled, but sudo access is granted to a special user named cloud-user . There is no root password set for this image. The root password is locked in /etc/shadow by placing !! in the second field. If you want to create custom Red Hat Enterprise Linux images, see Composing a customized Red Hat Enterprise Linux 9 Image or Composing a customized Red Hat Enterprise Linux 8 Image . C.1. Creating custom Red Hat Enterprise Linux images Prerequisites Use a Linux host machine to create an image. In this example, we use a Red Hat Enterprise Linux 7 Workstation. Use virt-manager on your workstation to complete this procedure. If you create the image on a remote server, connect to the server from your workstation with virt-manager . A Red Hat Enterprise Linux 7 or 6 ISO file (see Red Hat Enterprise Linux 7.4 Binary DVD or Red Hat Enterprise Linux 6.9 Binary DVD ). For more information about installing a Red Hat Enterprise Linux Workstation, see Red Hat Enterprise Linux 7 Installation Guide . Before you can create custom images, install the following packages: Install libvirt , qemu-kvm , and graphical tools: Install the following command line tools: Note In the following procedures, enter all commands with the [root@host]# prompt on the workstation that hosts the libvirt environment. C.2. Supported clients in registration Satellite supports the following operating systems and architectures for registration. Supported host operating systems The hosts can use the following operating systems: Red Hat Enterprise Linux 9, 8, 7 Red Hat Enterprise Linux 6 with the ELS Add-On Supported host architectures The hosts can use the following architectures: i386 x86_64 s390x ppc_64 C.3. Configuring a host for registration Configure your host for registration to Satellite Server or Capsule Server. You can use a configuration management tool to configure multiple hosts at once. Prerequisites The host must be using a supported operating system. For more information, see Section C.2, "Supported clients in registration" . The system clock on your Satellite Server and any Capsule Servers must be synchronized across the network. If the system clock is not synchronized, SSL certificate verification might fail. For example, you can use the Chrony suite for timekeeping. Procedure Enable and start a time-synchronization tool on your host. The host must be synchronized with the same NTP server as Satellite Server and any Capsule Servers. On Red Hat Enterprise Linux 7 and later: On Red Hat Enterprise Linux 6: Deploy the SSL CA file on your host so that the host can make a secured registration call. Find where Satellite stores the SSL CA file by navigating to Administer > Settings > Authentication and locating the value of the SSL CA file setting. Transfer the SSL CA file to your host securely, for example by using scp . Login to your host by using SSH. Copy the certificate to the truststore: Update the truststore: C.4. Registering a host You can register a host by using registration templates and set up various integration features and host tools during the registration process. 
Prerequisites Your Satellite account has the Register hosts role assigned or a role with equivalent permissions. You must have root privileges on the host that you want to register. You have configured the host for registration. For more information, see Section C.3, "Configuring a host for registration" . An activation key must be available for the host. For more information, see Managing Activation Keys in Managing content . Optional: If you want to register hosts to Red Hat Insights, you must synchronize the rhel-8-for-x86_64-baseos-rpms and rhel-8-for-x86_64-appstream-rpms repositories and make them available in the activation key that you use. This is required to install the insights-client package on hosts. Red Hat Satellite Client 6 repository for the operating system version of the host is synchronized on Satellite Server and enabled in the activation key you use. For more information, see Importing Content in Managing content . This repository is required for the remote execution pull client, Puppet agent, Tracer, and other tools. If you want to use Capsule Servers instead of your Satellite Server, ensure that you have configured your Capsule Servers accordingly. For more information, see Configuring Capsule for Host Registration and Provisioning in Installing Capsule Server . If your Satellite Server or Capsule Server is behind an HTTP proxy, configure the Subscription Manager on your host to use the HTTP proxy for connection. For more information, see How to access Red Hat Subscription Manager (RHSM) through a firewall or proxy in the Red Hat Knowledgebase . Procedure In the Satellite web UI, navigate to Hosts > Register Host . Enter the details for how you want the registered hosts to be configured. On the General tab, in the Activation Keys field, enter one or more activation keys to assign to hosts. Click Generate to generate a curl command. Run the curl command as root on the host that you want to register. After registration completes, any Ansible roles assigned to a host group you specified when configuring the registration template will run on the host. The registration details that you can specify include the following: On the General tab, in the Capsule field, you can select the Capsule to register hosts through. A Capsule behind a load balancer takes precedence over a Capsule selected in the Satellite web UI as the content source of the host. On the General tab, you can select the Insecure option to make the first call insecure. During this first call, the host downloads the CA file from Satellite. The host will use this CA file to connect to Satellite with all future calls making them secure. Red Hat recommends that you avoid insecure calls. If an attacker, located in the network between Satellite and a host, fetches the CA file from the first insecure call, the attacker will be able to access the content of the API calls to and from the registered host and the JSON Web Tokens (JWT). Therefore, if you have chosen to deploy SSH keys during registration, the attacker will be able to access the host using the SSH key. On the Advanced tab, in the Repositories field, you can list repositories to be added before the registration is performed. You do not have to specify repositories if you provide them in an activation key. On the Advanced tab, in the Token lifetime (hours) field, you can change the validity duration of the JSON Web Token (JWT) that Satellite uses for authentication. The duration of this token defines how long the generated curl command works. 
Note that Satellite applies the permissions of the user who generates the curl command to authorization of hosts. If the user loses or gains additional permissions, the permissions of the JWT change too. Therefore, do not delete, block, or change permissions of the user during the token duration. The scope of the JWTs is limited to the registration endpoints only and cannot be used anywhere else. CLI procedure Use the hammer host-registration generate-command to generate the curl command to register the host. On the host that you want to register, run the curl command as root . For more information, see the Hammer CLI help with hammer host-registration generate-command --help . Ansible procedure Use the redhat.satellite.registration_command module. For more information, see the Ansible module documentation with ansible-doc redhat.satellite.registration_command . API procedure Use the POST /api/registration_commands resource. For more information, see the full API reference at https://satellite.example.com/apidoc/v2.html . C.5. Installing and configuring Puppet agent manually You can install and configure the Puppet agent on a host manually. A configured Puppet agent is required on the host for Puppet integration with your Satellite. For more information about Puppet, see Managing configurations using Puppet integration . Prerequisites Puppet must be enabled in your Satellite. For more information, see Enabling Puppet Integration with Satellite in Managing configurations using Puppet integration . The host must have a Puppet environment assigned to it. Red Hat Satellite Client 6 repository for the operating system version of the host is synchronized on Satellite Server, available in the content view and the lifecycle environment of the host, and enabled for the host. For more information, see Changing the repository sets status for a host in Satellite in Managing content . Procedure Log in to the host as the root user. Install the Puppet agent package. On hosts running Red Hat Enterprise Linux 8 and above: On hosts running Red Hat Enterprise Linux 7 and below: Add the Puppet agent to PATH in your current shell using the following script: Configure the Puppet agent. Set the environment parameter to the name of the Puppet environment to which the host belongs: Start the Puppet agent service: Create a certificate for the host: In the Satellite web UI, navigate to Infrastructure > Capsules . From the list in the Actions column for the required Capsule Server, select Certificates . Click Sign to the right of the required host to sign the SSL certificate for the Puppet agent. On the host, run the Puppet agent again: C.6. Completing the Red Hat Enterprise Linux 7 image Procedure Update the system: Install the cloud-init packages: Open the /etc/cloud/cloud.cfg configuration file: Under the heading cloud_init_modules , add: The resolv-conf option automatically configures the resolv.conf when an instance boots for the first time. This file contains information related to the instance such as nameservers , domain and other options. 
Open the /etc/sysconfig/network file: Add the following line to avoid problems accessing the EC2 metadata service: Un-register the virtual machine so that the resulting image does not contain the same subscription details for every instance cloned based on it: Power off the instance: On your Red Hat Enterprise Linux Workstation, connect to the terminal as the root user and navigate to the /var/lib/libvirt/images/ directory: Reset and clean the image using the virt-sysprep command so it can be used to create instances without issues: Reduce image size using the virt-sparsify command. This command converts any free space within the disk image back to free space within the host: This creates a new rhel7-cloud.qcow2 file in the location where you enter the command. C.7. Completing the Red Hat Enterprise Linux 6 image Procedure Update the system: Install the cloud-init packages: Edit the /etc/cloud/cloud.cfg configuration file and under cloud_init_modules add: The resolv-conf option automatically configures the resolv.conf configuration file when an instance boots for the first time. This file contains information related to the instance such as nameservers , domain , and other options. To prevent network issues, create the /etc/udev/rules.d/75-persistent-net-generator.rules file as follows: This prevents the /etc/udev/rules.d/70-persistent-net.rules file from being created. If /etc/udev/rules.d/70-persistent-net.rules is created, networking might not function properly when booting from snapshots (the network interface is created as "eth1" rather than "eth0" and an IP address is not assigned). Add the following line to /etc/sysconfig/network to avoid problems accessing the EC2 metadata service: Un-register the virtual machine so that the resulting image does not contain the same subscription details for every instance cloned based on it: Power off the instance: On your Red Hat Enterprise Linux Workstation, log in as root and reset and clean the image using the virt-sysprep command so it can be used to create instances without issues: Reduce image size using the virt-sparsify command. This command converts any free space within the disk image back to free space within the host: This creates a new rhel6-cloud.qcow2 file in the location where you enter the command. Note You must manually resize the partitions of instances based on the image in accordance with the disk space in the flavor that is applied to the instance. C.7.1. Next steps Repeat the procedures for every image that you want to provision with Satellite. Move the image to the location where you want to store it for future use. C.8. Next steps Repeat the procedures for every image that you want to provision with Satellite. Move the image to the location where you want to store it for future use.
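Before moving a finished image into place, a short sanity check of the file is reasonable. This sketch assumes the rhel7-cloud.qcow2 name used above; adjust it for the RHEL 6 image.

# Inspect the final image: format, virtual size, and actual on-disk size
qemu-img info rhel7-cloud.qcow2

# Run the built-in consistency check on the qcow2 metadata
qemu-img check rhel7-cloud.qcow2

# Optionally record a checksum to verify the copy after moving the image
sha256sum rhel7-cloud.qcow2 > rhel7-cloud.qcow2.sha256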
[ "yum install virt-manager virt-viewer libvirt qemu-kvm", "yum install virt-install libguestfs-tools-c", "systemctl enable --now chronyd", "chkconfig --add ntpd chkconfig ntpd on service ntpd start", "cp My_SSL_CA_file .pem /etc/pki/ca-trust/source/anchors", "update-ca-trust", "dnf install puppet-agent", "yum install puppet-agent", ". /etc/profile.d/puppet-agent.sh", "puppet config set server satellite.example.com --section agent puppet config set environment My_Puppet_Environment --section agent", "puppet resource service puppet ensure=running enable=true", "puppet ssl bootstrap", "puppet ssl bootstrap", "yum update", "yum install cloud-utils-growpart cloud-init", "vi /etc/cloud/cloud.cfg", "- resolv-conf", "vi /etc/sysconfig/network", "NOZEROCONF=yes", "subscription-manager repos --disable=* subscription-manager unregister", "poweroff", "cd /var/lib/libvirt/images/", "virt-sysprep -d rhel7", "virt-sparsify --compress rhel7.qcow2 rhel7-cloud.qcow2", "yum update", "yum install cloud-utils-growpart cloud-init", "- resolv-conf", "echo \"#\" > /etc/udev/rules.d/75-persistent-net-generator.rules", "NOZEROCONF=yes", "subscription-manager repos --disable=* subscription-manager unregister yum clean all", "poweroff", "virt-sysprep -d rhel6", "virt-sparsify --compress rhel6.qcow2 rhel6-cloud.qcow2" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/provisioning_hosts/building_cloud_images_provisioning
Chapter 22. ProjectHelmChartRepository [helm.openshift.io/v1beta1]
Chapter 22. ProjectHelmChartRepository [helm.openshift.io/v1beta1] Description ProjectHelmChartRepository holds namespace-wide configuration for proxied Helm chart repository Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required spec 22.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object Observed status of the repository within the namespace. 22.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description connectionConfig object Required configuration for connecting to the chart repo description string Optional human readable repository description; it can be used by the UI for display purposes disabled boolean If set to true, disable the repo usage in the namespace name string Optional associated human readable repository name; it can be used by the UI for display purposes 22.1.2. .spec.connectionConfig Description Required configuration for connecting to the chart repo Type object Property Type Description basicAuthConfig object basicAuthConfig is an optional reference to a secret by name that contains the basic authentication credentials to present when connecting to the server. The key "username" is used to locate the username. The key "password" is used to locate the password. The namespace for this secret must be the same as the namespace where the project helm chart repository is getting instantiated. ca object ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca-bundle.crt" is used to locate the data. If empty, the default system roots are used. The namespace for this configmap must be the same as the namespace where the project helm chart repository is getting instantiated. tlsClientConfig object tlsClientConfig is an optional reference to a secret by name that contains the PEM-encoded TLS client certificate and private key to present when connecting to the server. The key "tls.crt" is used to locate the client certificate. The key "tls.key" is used to locate the private key. The namespace for this secret must be the same as the namespace where the project helm chart repository is getting instantiated. url string Chart repository URL 22.1.3. .spec.connectionConfig.basicAuthConfig Description basicAuthConfig is an optional reference to a secret by name that contains the basic authentication credentials to present when connecting to the server. The key "username" is used to locate the username. The key "password" is used to locate the password.
The namespace for this secret must be the same as the namespace where the project helm chart repository is getting instantiated. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 22.1.4. .spec.connectionConfig.ca Description ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca-bundle.crt" is used to locate the data. If empty, the default system roots are used. The namespace for this configmap must be the same as the namespace where the project helm chart repository is getting instantiated. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 22.1.5. .spec.connectionConfig.tlsClientConfig Description tlsClientConfig is an optional reference to a secret by name that contains the PEM-encoded TLS client certificate and private key to present when connecting to the server. The key "tls.crt" is used to locate the client certificate. The key "tls.key" is used to locate the private key. The namespace for this secret must be the same as the namespace where the project helm chart repository is getting instantiated. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 22.1.6. .status Description Observed status of the repository within the namespace. Type object Property Type Description conditions array conditions is a list of conditions and their statuses conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } 22.1.7. .status.conditions Description conditions is a list of conditions and their statuses Type array 22.1.8. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon.
For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 22.2. API endpoints The following API endpoints are available: /apis/helm.openshift.io/v1beta1/projecthelmchartrepositories GET : list objects of kind ProjectHelmChartRepository /apis/helm.openshift.io/v1beta1/namespaces/{namespace}/projecthelmchartrepositories DELETE : delete collection of ProjectHelmChartRepository GET : list objects of kind ProjectHelmChartRepository POST : create a ProjectHelmChartRepository /apis/helm.openshift.io/v1beta1/namespaces/{namespace}/projecthelmchartrepositories/{name} DELETE : delete a ProjectHelmChartRepository GET : read the specified ProjectHelmChartRepository PATCH : partially update the specified ProjectHelmChartRepository PUT : replace the specified ProjectHelmChartRepository /apis/helm.openshift.io/v1beta1/namespaces/{namespace}/projecthelmchartrepositories/{name}/status GET : read status of the specified ProjectHelmChartRepository PATCH : partially update status of the specified ProjectHelmChartRepository PUT : replace status of the specified ProjectHelmChartRepository 22.2.1. /apis/helm.openshift.io/v1beta1/projecthelmchartrepositories HTTP method GET Description list objects of kind ProjectHelmChartRepository Table 22.1. HTTP responses HTTP code Response body 200 - OK ProjectHelmChartRepositoryList schema 401 - Unauthorized Empty 22.2.2. /apis/helm.openshift.io/v1beta1/namespaces/{namespace}/projecthelmchartrepositories HTTP method DELETE Description delete collection of ProjectHelmChartRepository Table 22.2. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ProjectHelmChartRepository Table 22.3. HTTP responses HTTP code Response body 200 - OK ProjectHelmChartRepositoryList schema 401 - Unauthorized Empty HTTP method POST Description create a ProjectHelmChartRepository Table 22.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23.
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 22.5. Body parameters Parameter Type Description body ProjectHelmChartRepository schema Table 22.6. HTTP responses HTTP code Response body 200 - OK ProjectHelmChartRepository schema 201 - Created ProjectHelmChartRepository schema 202 - Accepted ProjectHelmChartRepository schema 401 - Unauthorized Empty 22.2.3. /apis/helm.openshift.io/v1beta1/namespaces/{namespace}/projecthelmchartrepositories/{name} Table 22.7. Global path parameters Parameter Type Description name string name of the ProjectHelmChartRepository HTTP method DELETE Description delete a ProjectHelmChartRepository Table 22.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 22.9. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ProjectHelmChartRepository Table 22.10. HTTP responses HTTP code Response body 200 - OK ProjectHelmChartRepository schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ProjectHelmChartRepository Table 22.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 22.12. HTTP responses HTTP code Response body 200 - OK ProjectHelmChartRepository schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ProjectHelmChartRepository Table 22.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted.
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 22.14. Body parameters Parameter Type Description body ProjectHelmChartRepository schema Table 22.15. HTTP responses HTTP code Response body 200 - OK ProjectHelmChartRepository schema 201 - Created ProjectHelmChartRepository schema 401 - Unauthorized Empty 22.2.4. /apis/helm.openshift.io/v1beta1/namespaces/{namespace}/projecthelmchartrepositories/{name}/status Table 22.16. Global path parameters Parameter Type Description name string name of the ProjectHelmChartRepository HTTP method GET Description read status of the specified ProjectHelmChartRepository Table 22.17. HTTP responses HTTP code Response body 200 - OK ProjectHelmChartRepository schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ProjectHelmChartRepository Table 22.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 22.19. HTTP responses HTTP code Response body 200 - OK ProjectHelmChartRepository schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ProjectHelmChartRepository Table 22.20.
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 22.21. Body parameters Parameter Type Description body ProjectHelmChartRepository schema Table 22.22. HTTP responses HTTP code Response body 200 - OK ProjectHelmChartRepository schema 201 - Created ProjectHelmChartRepository schema 401 - Unauthorized Empty
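For example, a minimal ProjectHelmChartRepository manifest might look like the following sketch. The repository name, namespace, and URL shown here are illustrative assumptions, not values taken from this reference:

apiVersion: helm.openshift.io/v1beta1
kind: ProjectHelmChartRepository
metadata:
  name: example-charts                # hypothetical repository name
  namespace: my-namespace             # hypothetical namespace; the resource is namespace-scoped
spec:
  name: Example Charts                # optional display name used by the UI
  connectionConfig:
    url: https://charts.example.com   # hypothetical chart repository URL
    ca:
      name: example-ca-bundle         # optional config map with ca-bundle.crt, in the same namespace

You could then create the resource with oc apply -f <file> and list it with oc get projecthelmchartrepositories -n my-namespace.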
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/config_apis/projecthelmchartrepository-helm-openshift-io-v1beta1
CLI Guide
CLI Guide Migration Toolkit for Applications 7.1 Learn how to use the Migration Toolkit for Applications CLI to migrate your applications. Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.1/html/cli_guide/index
14.5.9. Setting Network Interface Bandwidth Parameters
14.5.9. Setting Network Interface Bandwidth Parameters domiftune sets the guest virtual machine's network interface bandwidth parameters. The following format should be used: The only required parameters are the domain name and the interface device of the guest virtual machine. The --config, --live, and --current options function the same as in Section 14.19, "Setting Schedule Parameters". If no limit is specified, the command queries the current network interface settings. Otherwise, alter the limits with the following options: <interface-device> This is mandatory and it will set or query the domain's network interface bandwidth parameters. interface-device can be the interface's target name (<target dev='name'/>), or the MAC address. If no --inbound or --outbound is specified, this command will query and show the bandwidth settings. Otherwise, it will set the inbound or outbound bandwidth. The average,peak,burst format is the same as in the attach-interface command. Refer to Section 14.3, "Attaching Interface Devices".
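As a usage sketch (the domain name guest1 and interface device vnet0 are illustrative assumptions, and the --inbound values follow the average,peak,burst format described above), you might set an inbound limit on the running guest and then query the current settings:

virsh domiftune guest1 vnet0 --inbound 1000,2000,2048 --live
virsh domiftune guest1 vnet0

The first command applies the limit to the running guest; the second, with no --inbound or --outbound option, simply displays the interface's current bandwidth settings.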
[ "virsh domiftune domain interface-device [[--config] [--live] | [--current]] [--inbound average,peak,burst] [--outbound average,peak,burst]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-domain_commands-setting_network_interface_bandwidth_parameters
Chapter 14. Deploying machine health checks
Chapter 14. Deploying machine health checks You can configure and deploy a machine health check to automatically repair damaged machines in a machine pool. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 14.1. About machine health checks Note You can only apply a machine health check to machines that are managed by compute machine sets or control plane machine sets. To monitor machine health, create a resource to define the configuration for a controller. Set a condition to check, such as staying in the NotReady status for five minutes or displaying a permanent condition in the node-problem-detector, and a label for the set of machines to monitor. The controller that observes a MachineHealthCheck resource checks for the defined condition. If a machine fails the health check, the machine is automatically deleted and one is created to take its place. When a machine is deleted, you see a machine deleted event. To limit disruptive impact of the machine deletion, the controller drains and deletes only one node at a time. If there are more unhealthy machines than the maxUnhealthy threshold allows for in the targeted pool of machines, remediation stops and therefore enables manual intervention. Note Consider the timeouts carefully, accounting for workloads and requirements. Long timeouts can result in long periods of downtime for the workload on the unhealthy machine. Too short timeouts can result in a remediation loop. For example, the timeout for checking the NotReady status must be long enough to allow the machine to complete the startup process. To stop the check, remove the resource. 14.1.1. Limitations when deploying machine health checks There are limitations to consider before deploying a machine health check: Only machines owned by a machine set are remediated by a machine health check. If the node for a machine is removed from the cluster, a machine health check considers the machine to be unhealthy and remediates it immediately. If the corresponding node for a machine does not join the cluster after the nodeStartupTimeout , the machine is remediated. A machine is remediated immediately if the Machine resource phase is Failed . Additional resources About listing all the nodes in a cluster Short-circuiting machine health check remediation About the Control Plane Machine Set Operator 14.2. 
Sample MachineHealthCheck resource The MachineHealthCheck resource for all cloud-based installation types other than bare metal resembles the following YAML file: apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: "Ready" timeout: "300s" 5 status: "False" - type: "Ready" timeout: "300s" 6 status: "Unknown" maxUnhealthy: "40%" 7 nodeStartupTimeout: "10m" 8 1 Specify the name of the machine health check to deploy. 2 3 Specify a label for the machine pool that you want to check. 4 Specify the machine set to track in <cluster_name>-<label>-<zone> format. For example, prod-node-us-east-1a . 5 6 Specify the timeout duration for a node condition. If a condition is met for the duration of the timeout, the machine will be remediated. Long timeouts can result in long periods of downtime for a workload on an unhealthy machine. 7 Specify the number of machines allowed to be concurrently remediated in the targeted pool. This can be set as a percentage or an integer. If the number of unhealthy machines exceeds the limit set by maxUnhealthy , remediation is not performed. 8 Specify the timeout duration that a machine health check must wait for a node to join the cluster before a machine is determined to be unhealthy. Note The matchLabels are examples only; you must map your machine groups based on your specific needs. 14.2.1. Short-circuiting machine health check remediation Short-circuiting ensures that machine health checks remediate machines only when the cluster is healthy. Short-circuiting is configured through the maxUnhealthy field in the MachineHealthCheck resource. If the user defines a value for the maxUnhealthy field, before remediating any machines, the MachineHealthCheck compares the value of maxUnhealthy with the number of machines within its target pool that it has determined to be unhealthy. Remediation is not performed if the number of unhealthy machines exceeds the maxUnhealthy limit. Important If maxUnhealthy is not set, the value defaults to 100% and the machines are remediated regardless of the state of the cluster. The appropriate maxUnhealthy value depends on the scale of the cluster you deploy and how many machines the MachineHealthCheck covers. For example, you can use the maxUnhealthy value to cover multiple compute machine sets across multiple availability zones so that if you lose an entire zone, your maxUnhealthy setting prevents further remediation within the cluster. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. Important If you configure a MachineHealthCheck resource for the control plane, set the value of maxUnhealthy to 1 . This configuration ensures that the machine health check takes no action when multiple control plane machines appear to be unhealthy. Multiple unhealthy control plane machines can indicate that the etcd cluster is degraded or that a scaling operation to replace a failed machine is in progress. If the etcd cluster is degraded, manual intervention might be required. If a scaling operation is in progress, the machine health check should allow it to finish. The maxUnhealthy field can be set as either an integer or percentage. 
There are different remediation implementations depending on the maxUnhealthy value. 14.2.1.1. Setting maxUnhealthy by using an absolute value If maxUnhealthy is set to 2 : Remediation will be performed if 2 or fewer nodes are unhealthy Remediation will not be performed if 3 or more nodes are unhealthy These values are independent of how many machines are being checked by the machine health check. 14.2.1.2. Setting maxUnhealthy by using percentages If maxUnhealthy is set to 40% and there are 25 machines being checked: Remediation will be performed if 10 or fewer nodes are unhealthy Remediation will not be performed if 11 or more nodes are unhealthy If maxUnhealthy is set to 40% and there are 6 machines being checked: Remediation will be performed if 2 or fewer nodes are unhealthy Remediation will not be performed if 3 or more nodes are unhealthy Note The allowed number of machines is rounded down when the percentage of maxUnhealthy machines that are checked is not a whole number. 14.3. Creating a machine health check resource You can create a MachineHealthCheck resource for machine sets in your cluster. Note You can only apply a machine health check to machines that are managed by compute machine sets or control plane machine sets. Prerequisites Install the oc command line interface. Procedure Create a healthcheck.yml file that contains the definition of your machine health check. Apply the healthcheck.yml file to your cluster: USD oc apply -f healthcheck.yml You can configure and deploy a machine health check to detect and repair unhealthy bare metal nodes. 14.4. About power-based remediation of bare metal In a bare metal cluster, remediation of nodes is critical to ensuring the overall health of the cluster. Physically remediating a cluster can be challenging and any delay in putting the machine into a safe or an operational state increases the time the cluster remains in a degraded state, and the risk that subsequent failures might bring the cluster offline. Power-based remediation helps counter such challenges. Instead of reprovisioning the nodes, power-based remediation uses a power controller to power off an inoperable node. This type of remediation is also called power fencing. OpenShift Container Platform uses the MachineHealthCheck controller to detect faulty bare metal nodes. Power-based remediation is fast and reboots faulty nodes instead of removing them from the cluster. Power-based remediation provides the following capabilities: Allows the recovery of control plane nodes Reduces the risk of data loss in hyperconverged environments Reduces the downtime associated with recovering physical machines 14.4.1. MachineHealthChecks on bare metal Machine deletion on bare metal cluster triggers reprovisioning of a bare metal host. Usually bare metal reprovisioning is a lengthy process, during which the cluster is missing compute resources and applications might be interrupted. There are two ways to change the default remediation process from machine deletion to host power-cycle: Annotate the MachineHealthCheck resource with the machine.openshift.io/remediation-strategy: external-baremetal annotation. Create a Metal3RemediationTemplate resource, and refer to it in the spec.remediationTemplate of the MachineHealthCheck . After using one of these methods, unhealthy machines are power-cycled by using Baseboard Management Controller (BMC) credentials. 14.4.2. 
Understanding the annotation-based remediation process The remediation process operates as follows: The MachineHealthCheck (MHC) controller detects that a node is unhealthy. The MHC notifies the bare metal machine controller which requests to power-off the unhealthy node. After the power is off, the node is deleted, which allows the cluster to reschedule the affected workload on other nodes. The bare metal machine controller requests to power on the node. After the node is up, the node re-registers itself with the cluster, resulting in the creation of a new node. After the node is recreated, the bare metal machine controller restores the annotations and labels that existed on the unhealthy node before its deletion. Note If the power operations did not complete, the bare metal machine controller triggers the reprovisioning of the unhealthy node unless this is a control plane node or a node that was provisioned externally. 14.4.3. Understanding the metal3-based remediation process The remediation process operates as follows: The MachineHealthCheck (MHC) controller detects that a node is unhealthy. The MHC creates a metal3 remediation custom resource for the metal3 remediation controller, which requests to power-off the unhealthy node. After the power is off, the node is deleted, which allows the cluster to reschedule the affected workload on other nodes. The metal3 remediation controller requests to power on the node. After the node is up, the node re-registers itself with the cluster, resulting in the creation of a new node. After the node is recreated, the metal3 remediation controller restores the annotations and labels that existed on the unhealthy node before its deletion. Note If the power operations did not complete, the metal3 remediation controller triggers the reprovisioning of the unhealthy node unless this is a control plane node or a node that was provisioned externally. 14.4.4. Creating a MachineHealthCheck resource for bare metal Prerequisites The OpenShift Container Platform is installed using installer-provisioned infrastructure (IPI). Access to BMC credentials (or BMC access to each node). Network access to the BMC interface of the unhealthy node. Procedure Create a healthcheck.yaml file that contains the definition of your machine health check. Apply the healthcheck.yaml file to your cluster using the following command: USD oc apply -f healthcheck.yaml Sample MachineHealthCheck resource for bare metal, annotation-based remediation apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api annotations: machine.openshift.io/remediation-strategy: external-baremetal 2 spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 3 machine.openshift.io/cluster-api-machine-type: <role> 4 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 5 unhealthyConditions: - type: "Ready" timeout: "300s" 6 status: "False" - type: "Ready" timeout: "300s" 7 status: "Unknown" maxUnhealthy: "40%" 8 nodeStartupTimeout: "10m" 9 1 Specify the name of the machine health check to deploy. 2 For bare metal clusters, you must include the machine.openshift.io/remediation-strategy: external-baremetal annotation in the annotations section to enable power-cycle remediation. With this remediation strategy, unhealthy hosts are rebooted instead of removed from the cluster. 3 4 Specify a label for the machine pool that you want to check. 
5 Specify the compute machine set to track in <cluster_name>-<label>-<zone> format. For example, prod-node-us-east-1a . 6 7 Specify the timeout duration for the node condition. If the condition is met for the duration of the timeout, the machine will be remediated. Long timeouts can result in long periods of downtime for a workload on an unhealthy machine. 8 Specify the number of machines allowed to be concurrently remediated in the targeted pool. This can be set as a percentage or an integer. If the number of unhealthy machines exceeds the limit set by maxUnhealthy , remediation is not performed. 9 Specify the timeout duration that a machine health check must wait for a node to join the cluster before a machine is determined to be unhealthy. Note The matchLabels are examples only; you must map your machine groups based on your specific needs. Sample MachineHealthCheck resource for bare metal, metal3-based remediation apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> remediationTemplate: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: Metal3RemediationTemplate name: metal3-remediation-template namespace: openshift-machine-api unhealthyConditions: - type: "Ready" timeout: "300s" Sample Metal3RemediationTemplate resource for bare metal, metal3-based remediation apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: Metal3RemediationTemplate metadata: name: metal3-remediation-template namespace: openshift-machine-api spec: template: spec: strategy: type: Reboot retryLimit: 1 timeout: 5m0s Note The matchLabels are examples only; you must map your machine groups based on your specific needs. The annotations section does not apply to metal3-based remediation. Annotation-based remediation and metal3-based remediation are mutually exclusive. 14.4.5. Troubleshooting issues with power-based remediation To troubleshoot an issue with power-based remediation, verify the following: You have access to the BMC. The BMC is connected to the control plane node that is responsible for running the remediation task.
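As a quick verification sketch (the resource name example matches the samples above; these are standard oc commands rather than a documented troubleshooting procedure), you can confirm that the health check exists and inspect its status and recent remediation activity:

oc get machinehealthcheck -n openshift-machine-api
oc describe machinehealthcheck example -n openshift-machine-api
oc get events -n openshift-machine-api --sort-by='.lastTimestamp'

The describe output typically reports the expected and currently healthy machine counts, which helps confirm whether the maxUnhealthy short-circuit is blocking remediation.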
[ "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: \"Ready\" timeout: \"300s\" 5 status: \"False\" - type: \"Ready\" timeout: \"300s\" 6 status: \"Unknown\" maxUnhealthy: \"40%\" 7 nodeStartupTimeout: \"10m\" 8", "oc apply -f healthcheck.yml", "oc apply -f healthcheck.yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api annotations: machine.openshift.io/remediation-strategy: external-baremetal 2 spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 3 machine.openshift.io/cluster-api-machine-type: <role> 4 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 5 unhealthyConditions: - type: \"Ready\" timeout: \"300s\" 6 status: \"False\" - type: \"Ready\" timeout: \"300s\" 7 status: \"Unknown\" maxUnhealthy: \"40%\" 8 nodeStartupTimeout: \"10m\" 9", "apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> remediationTemplate: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: Metal3RemediationTemplate name: metal3-remediation-template namespace: openshift-machine-api unhealthyConditions: - type: \"Ready\" timeout: \"300s\"", "apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: Metal3RemediationTemplate metadata: name: metal3-remediation-template namespace: openshift-machine-api spec: template: spec: strategy: type: Reboot retryLimit: 1 timeout: 5m0s" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/machine_management/deploying-machine-health-checks