Chapter 5. HardwareData [metal3.io/v1alpha1]
Chapter 5. HardwareData [metal3.io/v1alpha1] Description HardwareData is the Schema for the hardwaredata API Type object 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object HardwareDataSpec defines the desired state of HardwareData 5.1.1. .spec Description HardwareDataSpec defines the desired state of HardwareData Type object Property Type Description hardware object The hardware discovered on the host during its inspection. 5.1.2. .spec.hardware Description The hardware discovered on the host during its inspection. Type object Property Type Description cpu object CPU describes one processor on the host. firmware object Firmware describes the firmware on the host. hostname string nics array nics[] object NIC describes one network interface on the host. ramMebibytes integer storage array storage[] object Storage describes one storage device (disk, SSD, etc.) on the host. systemVendor object HardwareSystemVendor stores details about the whole hardware system. 5.1.3. .spec.hardware.cpu Description CPU describes one processor on the host. Type object Property Type Description arch string clockMegahertz number ClockSpeed is a clock speed in MHz count integer flags array (string) model string 5.1.4. .spec.hardware.firmware Description Firmware describes the firmware on the host. Type object Property Type Description bios object The BIOS for this firmware 5.1.5. .spec.hardware.firmware.bios Description The BIOS for this firmware Type object Property Type Description date string The release/build date for this BIOS vendor string The vendor name for this BIOS version string The version of the BIOS 5.1.6. .spec.hardware.nics Description Type array 5.1.7. .spec.hardware.nics[] Description NIC describes one network interface on the host. Type object Property Type Description ip string The IP address of the interface. This will be an IPv4 or IPv6 address if one is present. If both IPv4 and IPv6 addresses are present in a dual-stack environment, two nics will be output, one with each IP. mac string The device MAC address model string The vendor and product IDs of the NIC, e.g. "0x8086 0x1572" name string The name of the network interface, e.g. "en0" pxe boolean Whether the NIC is PXE Bootable speedGbps integer The speed of the device in Gigabits per second vlanId integer The untagged VLAN ID vlans array The VLANs available vlans[] object VLAN represents the name and ID of a VLAN 5.1.8. .spec.hardware.nics[].vlans Description The VLANs available Type array 5.1.9. .spec.hardware.nics[].vlans[] Description VLAN represents the name and ID of a VLAN Type object Property Type Description id integer VLANID is a 12-bit 802.1Q VLAN identifier name string 5.1.10. .spec.hardware.storage Description Type array 5.1.11. 
.spec.hardware.storage[] Description Storage describes one storage device (disk, SSD, etc.) on the host. Type object Property Type Description alternateNames array (string) A list of alternate Linux device names of the disk, e.g. "/dev/sda". Note that this list is not exhaustive, and names may not be stable across reboots. hctl string The SCSI location of the device model string Hardware model name string A Linux device name of the disk, e.g. "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0". This will be a name that is stable across reboots if one is available. rotational boolean Whether this disk represents rotational storage. This field is not recommended for usage, please prefer using 'Type' field instead, this field will be deprecated eventually. serialNumber string The serial number of the device sizeBytes integer The size of the disk in Bytes type string Device type, one of: HDD, SSD, NVME. vendor string The name of the vendor of the device wwn string The WWN of the device wwnVendorExtension string The WWN Vendor extension of the device wwnWithExtension string The WWN with the extension 5.1.12. .spec.hardware.systemVendor Description HardwareSystemVendor stores details about the whole hardware system. Type object Property Type Description manufacturer string productName string serialNumber string 5.2. API endpoints The following API endpoints are available: /apis/metal3.io/v1alpha1/hardwaredata GET : list objects of kind HardwareData /apis/metal3.io/v1alpha1/namespaces/{namespace}/hardwaredata DELETE : delete collection of HardwareData GET : list objects of kind HardwareData POST : create a HardwareData /apis/metal3.io/v1alpha1/namespaces/{namespace}/hardwaredata/{name} DELETE : delete a HardwareData GET : read the specified HardwareData PATCH : partially update the specified HardwareData PUT : replace the specified HardwareData 5.2.1. /apis/metal3.io/v1alpha1/hardwaredata HTTP method GET Description list objects of kind HardwareData Table 5.1. HTTP responses HTTP code Reponse body 200 - OK HardwareDataList schema 401 - Unauthorized Empty 5.2.2. /apis/metal3.io/v1alpha1/namespaces/{namespace}/hardwaredata HTTP method DELETE Description delete collection of HardwareData Table 5.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind HardwareData Table 5.3. HTTP responses HTTP code Reponse body 200 - OK HardwareDataList schema 401 - Unauthorized Empty HTTP method POST Description create a HardwareData Table 5.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.5. Body parameters Parameter Type Description body HardwareData schema Table 5.6. HTTP responses HTTP code Reponse body 200 - OK HardwareData schema 201 - Created HardwareData schema 202 - Accepted HardwareData schema 401 - Unauthorized Empty 5.2.3. /apis/metal3.io/v1alpha1/namespaces/{namespace}/hardwaredata/{name} Table 5.7. Global path parameters Parameter Type Description name string name of the HardwareData HTTP method DELETE Description delete a HardwareData Table 5.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified HardwareData Table 5.10. HTTP responses HTTP code Reponse body 200 - OK HardwareData schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified HardwareData Table 5.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.12. HTTP responses HTTP code Reponse body 200 - OK HardwareData schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified HardwareData Table 5.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. 
This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.14. Body parameters Parameter Type Description body HardwareData schema Table 5.15. HTTP responses HTTP code Reponse body 200 - OK HardwareData schema 201 - Created HardwareData schema 401 - Unauthorized Empty
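The following is a minimal sketch of how this resource can be inspected from the CLI and roughly what the fields described above look like when populated. The namespace, object name, and all hardware values below are illustrative assumptions; a HardwareData object is typically created during host inspection with the same name as its BareMetalHost.

# List HardwareData objects and print one as YAML (names and namespace are hypothetical)
oc get hardwaredata -n openshift-machine-api
oc get hardwaredata worker-0 -n openshift-machine-api -o yaml

# Abbreviated, illustrative output built from the .spec.hardware fields above:
# apiVersion: metal3.io/v1alpha1
# kind: HardwareData
# metadata:
#   name: worker-0
#   namespace: openshift-machine-api
# spec:
#   hardware:
#     cpu:
#       arch: x86_64
#       count: 8
#       clockMegahertz: 2400
#       model: Example CPU
#     ramMebibytes: 16384
#     nics:
#     - name: eth0
#       mac: "52:54:00:a1:b2:c3"
#       ip: 192.168.111.20
#       pxe: true
#     storage:
#     - name: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0
#       sizeBytes: 53687091200
#       type: HDD
#     systemVendor:
#       manufacturer: Example Corp
#       productName: Example Server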
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/provisioning_apis/hardwaredata-metal3-io-v1alpha1
5.3.9.3. /proc/sys/kernel/
5.3.9.3. /proc/sys/kernel/ This directory contains a variety of different configuration files that directly affect the operation of the kernel. Some of the most important files include: acct - Controls the suspension of process accounting based on the percentage of free space available on the file system containing the log. By default, the file looks like the following: The first value dictates the percentage of free space required for logging to resume, while the second value sets the threshold percentage of free space when logging is suspended. The third value sets the interval, in seconds, that the kernel polls the file system to see if logging should be suspended or resumed. cap-bound - Controls the capability bounding settings, which provides a list of capabilities for any process on the system. If a capability is not listed here, then no process, no matter how privileged, can do it. The idea is to make the system more secure by ensuring that certain things cannot happen, at least beyond a certain point in the boot process. For a valid list of values for this virtual file, refer to the following installed documentation: /lib/modules/ <kernel-version> /build/include/linux/capability.h . ctrl-alt-del - Controls whether Ctrl + Alt + Delete gracefully restarts the computer using init ( 0 ) or forces an immediate reboot without syncing the dirty buffers to disk ( 1 ). domainname - Configures the system domain name, such as example.com . exec-shield - Configures the Exec Shield feature of the kernel. Exec Shield provides protection against certain types of buffer overflow attacks. There are two possible values for this virtual file: 0 - Disables Exec Shield. 1 - Enables Exec Shield. This is the default value. Important If a system is running security-sensitive applications that were started while Exec Shield was disabled, these applications must be restarted when Exec Shield is enabled in order for Exec Shield to take effect. exec-shield-randomize - Enables location randomization of various items in memory. This helps deter potential attackers from locating programs and daemons in memory. Each time a program or daemon starts, it is put into a different memory location each time, never in a static or absolute memory address. There are two possible values for this virtual file: 0 - Disables randomization of Exec Shield. This may be useful for application debugging purposes. 1 - Enables randomization of Exec Shield. This is the default value. Note: The exec-shield file must also be set to 1 for exec-shield-randomize to be effective. hostname - Configures the system hostname, such as www.example.com . hotplug - Configures the utility to be used when a configuration change is detected by the system. This is primarily used with USB and Cardbus PCI. The default value of /sbin/hotplug should not be changed unless testing a new program to fulfill this role. modprobe - Sets the location of the program used to load kernel modules. The default value is /sbin/modprobe which means kmod calls it to load the module when a kernel thread calls kmod . msgmax - Sets the maximum size of any message sent from one process to another and is set to 8192 bytes by default. Be careful when raising this value, as queued messages between processes are stored in non-swappable kernel memory. Any increase in msgmax would increase RAM requirements for the system. msgmnb - Sets the maximum number of bytes in a single message queue. The default is 16384 . msgmni - Sets the maximum number of message queue identifiers. 
The default is 16 . osrelease - Lists the Linux kernel release number. This file can only be altered by changing the kernel source and recompiling. ostype - Displays the type of operating system. By default, this file is set to Linux , and this value can only be changed by changing the kernel source and recompiling. overflowgid and overflowuid - Defines the fixed group ID and user ID, respectively, for use with system calls on architectures that only support 16-bit group and user IDs. panic - Defines the number of seconds the kernel postpones rebooting when the system experiences a kernel panic. By default, the value is set to 0 , which disables automatic rebooting after a panic. printk - This file controls a variety of settings related to printing or logging error messages. Each error message reported by the kernel has a loglevel associated with it that defines the importance of the message. The loglevel values break down in this order: 0 - Kernel emergency. The system is unusable. 1 - Kernel alert. Action must be taken immediately. 2 - Condition of the kernel is considered critical. 3 - General kernel error condition. 4 - General kernel warning condition. 5 - Kernel notice of a normal but significant condition. 6 - Kernel informational message. 7 - Kernel debug-level messages. Four values are found in the printk file: Each of these values defines a different rule for dealing with error messages. The first value, called the console loglevel , defines the lowest priority of messages printed to the console. (Note that, the lower the priority, the higher the loglevel number.) The second value sets the default loglevel for messages without an explicit loglevel attached to them. The third value sets the lowest possible loglevel configuration for the console loglevel. The last value sets the default value for the console loglevel. random/ directory - Lists a number of values related to generating random numbers for the kernel. rtsig-max - Configures the maximum number of POSIX real-time signals that the system may have queued at any one time. The default value is 1024 . rtsig-nr - Lists the current number of POSIX real-time signals queued by the kernel. sem - Configures semaphore settings within the kernel. A semaphore is a System V IPC object that is used to control utilization of a particular process. shmall - Sets the total amount of shared memory that can be used at one time on the system, in pages. By default, this value is 2097152 . shmmax - Sets the largest shared memory segment size allowed by the kernel, in bytes. By default, this value is 33554432 . However, the kernel supports much larger values than this. shmmni - Sets the maximum number of shared memory segments for the whole system, in bytes. By default, this value is 4096 sysrq - Activates the System Request Key, if this value is set to anything other than zero ( 0 ), the default. The System Request Key allows immediate input to the kernel through simple key combinations. For example, the System Request Key can be used to immediately shut down or restart a system, sync all mounted file systems, or dump important information to the console. To initiate a System Request Key, type Alt + SysRq + <system request code> . Replace <system request code> with one of the following system request codes: r - Disables raw mode for the keyboard and sets it to XLATE (a limited keyboard mode which does not recognize modifiers such as Alt , Ctrl , or Shift for all keys). k - Kills all processes active in a virtual console. 
Also called Secure Access Key ( SAK ), it is often used to verify that the login prompt is spawned from init and not a trojan copy designed to capture usernames and passwords. b - Reboots the kernel without first unmounting file systems or syncing disks attached to the system. c - Crashes the system without first unmounting file systems or syncing disks attached to the system. o - Shuts off the system. s - Attempts to sync disks attached to the system. u - Attempts to unmount and remount all file systems as read-only. p - Outputs all flags and registers to the console. t - Outputs a list of processes to the console. m - Outputs memory statistics to the console. 0 through 9 - Sets the log level for the console. e - Kills all processes except init using SIGTERM. i - Kills all processes except init using SIGKILL. l - Kills all processes using SIGKILL (including init ). The system is unusable after issuing this System Request Key code. h - Displays help text. This feature is most beneficial when using a development kernel or when experiencing system freezes. Warning The System Request Key feature is considered a security risk because an unattended console provides an attacker with access to the system. For this reason, it is turned off by default. Refer to /usr/share/doc/kernel-doc- <version> /Documentation/sysrq.txt for more information about the System Request Key. sysrq-key - Defines the key code for the System Request Key ( 84 is the default). sysrq-sticky - Defines whether the System Request Key is a chorded key combination. The accepted values are as follows: 0 - Alt + SysRq and the system request code must be pressed simultaneously. This is the default value. 1 - Alt + SysRq must be pressed simultaneously, but the system request code can be pressed anytime before the number of seconds specified in /proc/sys/kernel/sysrq-timer elapses. sysrq-timer - Specifies the number of seconds allowed to pass before the system request code must be pressed. The default value is 10 . tainted - Indicates whether a non-GPL module is loaded. 0 - No non-GPL modules are loaded. 1 - At least one module without a GPL license (including modules with no license) is loaded. 2 - At least one module was force-loaded with the command insmod -f . threads-max - Sets the maximum number of threads to be used by the kernel, with a default value of 2048 . version - Displays the date and time the kernel was last compiled. The first field in this file, such as #3 , relates to the number of times a kernel was built from the source base.
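The tunables in this directory can be read and changed at runtime. The following is a brief sketch; the values shown are examples only, not recommendations.

# Read current values directly from the virtual files described above
cat /proc/sys/kernel/acct
cat /proc/sys/kernel/printk

# Change a value for the running kernel by writing to the file...
echo 1 > /proc/sys/kernel/sysrq

# ...or with the sysctl command, where dots replace the slashes under /proc/sys/
sysctl -w kernel.msgmax=16384

# To make a setting persist across reboots, add a line such as
#   kernel.sysrq = 1
# to /etc/sysctl.conf and reload it:
sysctl -p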
[ "4 2 30", "6 4 1 7" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s3-proc-sys-kernel
Chapter 5. Installation
Chapter 5. Installation Troubleshoot issues with your installation. 5.1. Issue - Cannot locate certain packages that come bundled with the Ansible Automation Platform installer You cannot locate certain packages that come bundled with the Ansible Automation Platform installer, or you are seeing a "Repositories disabled by configuration" message. To resolve this issue, enable the repository by using the subscription-manager command in the command line. For more information about resolving this issue, see the Troubleshooting section of Attaching your Red Hat Ansible Automation Platform subscription in Access management and authentication .
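A minimal sketch of the resolution follows. The exact repository ID depends on your RHEL release and Ansible Automation Platform version, so the ID used below is a placeholder; use the first command to find the ID that applies to your system.

# List the Ansible Automation Platform repositories known to the system
subscription-manager repos --list | grep ansible

# Enable the matching repository (the ID shown here is an example)
subscription-manager repos --enable=ansible-automation-platform-2.5-for-rhel-9-x86_64-rpms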
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/troubleshooting_ansible_automation_platform/troubleshoot-installation
Chapter 5. What to do next? Day 2
Chapter 5. What to do next? Day 2 As a storage administrator, once you have installed and configured Red Hat Ceph Storage 7, you are ready to perform "Day Two" operations for your storage cluster. These operations include adding metadata servers (MDS) and object gateways (RGW), and configuring services such as NFS. For more information about how to use the cephadm orchestrator to perform "Day Two" operations, refer to the Red Hat Ceph Storage 7 Operations Guide. To deploy, configure, and administer the Ceph Object Gateway as part of "Day Two" operations, refer to the Red Hat Ceph Storage 7 Object Gateway Guide.
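As a brief illustration, the cephadm orchestrator can deploy these "Day Two" services with commands along the following lines; the service names and placement counts are examples only, and the Operations Guide and Object Gateway Guide remain the authoritative references.

# Deploy metadata servers for an existing CephFS file system (names and placements are illustrative)
ceph orch apply mds myfs --placement="2"

# Deploy a Ceph Object Gateway service
ceph orch apply rgw myrgw --placement="2"

# Create a cephadm-managed NFS cluster
ceph nfs cluster create mynfs "2"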
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/installation_guide/what-to-do-next
22.3. GNOME Boxes
22.3. GNOME Boxes Boxes is a lightweight graphical desktop virtualization tool used to view and access virtual machines and remote systems. Unlike virt-viewer and remote-viewer , Boxes not only allows viewing guest virtual machines, but also creating and configuring them, similar to virt-manager . However, in comparison with virt-manager , Boxes offers fewer management options and features, but is easier to use. To install Boxes , run: Open Boxes through Applications ⇒ System Tools . The main screen shows the available guest virtual machines. The right side of the screen has two buttons: - the search button, to search for guest virtual machines by name, and - the selection button. Clicking the selection button allows you to select one or more guest virtual machines in order to perform operations individually or as a group. The available operations are shown at the bottom of the screen on the operations bar: Figure 22.3. The Operations Bar There are four operations that can be performed: Favorite : Adds a heart to selected guest virtual machines and moves them to the top of the list of guests. This becomes increasingly helpful as the number of guests grows. Pause : The selected guest virtual machines stop running. Delete : Removes selected guest virtual machines. Properties : Shows the properties of the selected guest virtual machine. Create new guest virtual machines using the New button on the left side of the main screen. Procedure 22.1. Creating a new guest virtual machine with Boxes Click New . This opens the Introduction screen. Click Continue . Figure 22.4. Introduction screen Select source The Source Selection screen has three options: Available media : Any immediately available installation media is shown here. Clicking any of these takes you directly to the Review screen. Enter a URL : Type in a URL to specify a local URI or path to an ISO file. This can also be used to access a remote machine. The address should follow the pattern protocol://IPaddress?port; , for example: The protocol can be spice:// , qemu:// , or vnc:// (an example of finding the address for a running guest follows this procedure). Select a file : Open a file directory to search for installation media manually. Figure 22.5. Source Selection screen Review the details The Review screen shows the details of the guest virtual machine. Figure 22.6. Review screen These details can be left as is, in which case proceed to the final step, or: Optional: Customize the details. Clicking Customize allows you to adjust the configuration of the guest virtual machine, such as the memory and disk size. Figure 22.7. Customization screen Create Click Create . The new guest virtual machine opens.
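If the guest runs on a remote host, one way to determine the port to use with the Enter a URL option is shown below; the guest name and addresses are examples only.

# On the host running the guest, print its graphical display URI
# (the guest name "rhel7-guest" is an example)
virsh domdisplay rhel7-guest
# Example output: spice://127.0.0.1:5906

# Combining that port with the host's address gives the URL to enter in Boxes, for example:
#   spice://192.168.122.1?port=5906;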
[ "yum install gnome-boxes", "spice://192.168.122.1?port=5906;" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-graphic_user_interface_tools_for_guest_virtual_machine_management-gnome_boxes
5.7. Securing Sendmail
5.7. Securing Sendmail Sendmail is a Mail Transfer Agent (MTA) that uses the Simple Mail Transfer Protocol (SMTP) to deliver electronic messages to other MTAs and to email clients or delivery agents. Although many MTAs are capable of encrypting traffic between one another, most do not, so sending email over any public network is considered an inherently insecure form of communication. For more information about how email works and an overview of common configuration settings, refer to the chapter titled Email in the Reference Guide . This section assumes a basic knowledge of how to generate a valid /etc/mail/sendmail.cf by editing /etc/mail/sendmail.mc and running the m4 command, as explained in the Reference Guide . It is recommended that anyone planning to implement a Sendmail server address the following issues. 5.7.1. Limiting a Denial of Service Attack Because of the nature of email, a determined attacker can flood the server with mail fairly easily and cause a denial of service. By setting limits on the following directives in /etc/mail/sendmail.mc , the effectiveness of such attacks is limited. confCONNECTION_RATE_THROTTLE - The number of connections the server can receive per second. By default, Sendmail does not limit the number of connections. If a limit is set and reached, further connections are delayed. confMAX_DAEMON_CHILDREN - The maximum number of child processes that can be spawned by the server. By default, Sendmail does not assign a limit to the number of child processes. If a limit is set and reached, further connections are delayed. confMIN_FREE_BLOCKS - The minimum number of free blocks which must be available for the server to accept mail. The default is 100 blocks. confMAX_HEADERS_LENGTH - The maximum acceptable size (in bytes) for a message header. confMAX_MESSAGE_SIZE - The maximum acceptable size (in bytes) for any one message.
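As a sketch of how these limits might be applied, the directives can be set in /etc/mail/sendmail.mc with define statements and the configuration regenerated with m4; the values below are illustrative, not recommendations.

# Example /etc/mail/sendmail.mc entries for the directives described above
define(`confCONNECTION_RATE_THROTTLE', `3')dnl
define(`confMAX_DAEMON_CHILDREN', `40')dnl
define(`confMIN_FREE_BLOCKS', `100')dnl
define(`confMAX_HEADERS_LENGTH', `32768')dnl
define(`confMAX_MESSAGE_SIZE', `10485760')dnl

# Regenerate /etc/mail/sendmail.cf and restart Sendmail for the changes to take effect
m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf
service sendmail restart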
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/s1-server-mail
Chapter 17. Impersonating the system:admin user
Chapter 17. Impersonating the system:admin user 17.1. API impersonation You can configure a request to the OpenShift Container Platform API to act as though it originated from another user. For more information, see User impersonation in the Kubernetes documentation. 17.2. Impersonating the system:admin user You can grant a user permission to impersonate system:admin , which grants them cluster administrator permissions. Procedure To grant a user permission to impersonate system:admin , run the following command: USD oc create clusterrolebinding <any_valid_name> --clusterrole=sudoer --user=<username> Tip You can alternatively apply the following YAML to grant permission to impersonate system:admin : apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: <any_valid_name> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: sudoer subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: <username> 17.3. Impersonating the system:admin group When a system:admin user is granted cluster administration permissions through a group, you must include the --as=<user> --as-group=<group1> --as-group=<group2> parameters in the command to impersonate the associated groups. Procedure To grant a user permission to impersonate system:admin by impersonating the associated cluster administration groups, run the following command: USD oc create clusterrolebinding <any_valid_name> --clusterrole=sudoer --as=<user> \ --as-group=<group1> --as-group=<group2> 17.4. Adding unauthenticated groups to cluster roles As a cluster administrator, you can add unauthenticated users to the following cluster roles in OpenShift Container Platform by creating a cluster role binding. Unauthenticated users do not have access to non-public cluster roles. This should only be done in specific use cases when necessary. You can add unauthenticated users to the following cluster roles: system:scope-impersonation system:webhook system:oauth-token-deleter self-access-reviewer Important Always verify compliance with your organization's security standards when modifying unauthenticated access. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Create a YAML file named add-<cluster_role>-unauth.yaml and add the following content: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" name: <cluster_role>access-unauthenticated roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <cluster_role> subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:unauthenticated Apply the configuration by running the following command: USD oc apply -f add-<cluster_role>-unauth.yaml
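To illustrate how the impersonation grants from sections 17.2 and 17.3 are used in practice, the user supplies the impersonation parameters on individual commands once the binding exists. The commands below are an illustrative sketch; the group names are placeholders.

# Impersonate system:admin for a single command after being granted the sudoer role
oc get nodes --as=system:admin

# When the cluster administration permissions come from groups, include them as well
oc get nodes --as=system:admin --as-group=<group1> --as-group=<group2>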
[ "oc create clusterrolebinding <any_valid_name> --clusterrole=sudoer --user=<username>", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: <any_valid_name> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: sudoer subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: <username>", "oc create clusterrolebinding <any_valid_name> --clusterrole=sudoer --as=<user> --as-group=<group1> --as-group=<group2>", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"true\" name: <cluster_role>access-unauthenticated roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <cluster_role> subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:unauthenticated", "oc apply -f add-<cluster_role>.yaml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/authentication_and_authorization/impersonating-system-admin
Chapter 6. opm CLI
Chapter 6. opm CLI 6.1. Installing the opm CLI 6.1.1. About the opm CLI The opm CLI tool is provided by the Operator Framework for use with the Operator bundle format. This tool allows you to create and maintain catalogs of Operators from a list of Operator bundles that are similar to software repositories. The result is a container image which can be stored in a container registry and then installed on a cluster. A catalog contains a database of pointers to Operator manifest content that can be queried through an included API that is served when the container image is run. On OpenShift Container Platform, Operator Lifecycle Manager (OLM) can reference the image in a catalog source, defined by a CatalogSource object, which polls the image at regular intervals to enable frequent updates to installed Operators on the cluster. Additional resources See Operator Framework packaging format for more information about the bundle format. To create a bundle image using the Operator SDK, see Working with bundle images . 6.1.2. Installing the opm CLI You can install the opm CLI tool on your Linux, macOS, or Windows workstation. Prerequisites For Linux, you must provide the following packages. RHEL 8 meets these requirements: podman version 1.9.3+ (version 2.0+ recommended) glibc version 2.28+ Procedure Navigate to the OpenShift mirror site and download the latest version of the tarball that matches your operating system. Unpack the archive. For Linux or macOS: USD tar xvf <file> For Windows, unzip the archive with a ZIP program. Place the file anywhere in your PATH . For Linux or macOS: Check your PATH : USD echo USDPATH Move the file. For example: USD sudo mv ./opm /usr/local/bin/ For Windows: Check your PATH : C:\> path Move the file: C:\> move opm.exe <directory> Verification After you install the opm CLI, verify that it is available: USD opm version 6.1.3. Additional resources See Managing custom catalogs for opm procedures including creating, updating, and pruning catalogs. 6.2. opm CLI reference The opm command-line interface (CLI) is a tool for creating and maintaining Operator catalogs. opm CLI syntax USD opm <command> [<subcommand>] [<argument>] [<flags>] Table 6.1. Global flags Flag Description --skip-tls Skip TLS certificate verification for container image registries while pulling bundles or indexes. Important The SQLite-based catalog format, including the related CLI commands, is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. 6.2.1. index Generate Operator index container images from pre-existing Operator bundles. Command syntax USD opm index <subcommand> [<flags>] Table 6.2. index subcommands Subcommand Description add Add Operator bundles to an index. prune Prune an index of all but specified packages. prune-stranded Prune an index of stranded bundles, which are bundles that are not associated with a particular image. rm Delete an entire Operator from an index. 6.2.1.1. add Add Operator bundles to an index. Command syntax USD opm index add [<flags>] Table 6.3. 
index add flags Flag Description -i , --binary-image Container image for on-image opm command -u , --build-tool (string) Tool to build container images: podman (the default value) or docker . Overrides part of the --container-tool flag. -b , --bundles (strings) Comma-separated list of bundles to add. -c , --container-tool (string) Tool to interact with container images, such as for saving and building: docker or podman . -f , --from-index (string) index to add to. --generate If enabled, only creates the Dockerfile and saves it to local disk. --mode (string) Graph update mode that defines how channel graphs are updated: replaces (the default value), semver , or semver-skippatch . -d , --out-dockerfile (string) Optional: If generating the Dockerfile, specify a file name. --permissive Allow registry load errors. -p , --pull-tool (string) Tool to pull container images: none (the default value), docker , or podman . Overrides part of the --container-tool flag. -t , --tag (string) Custom tag for container image being built. 6.2.1.2. prune Prune an index of all but specified packages. Command syntax USD opm index prune [<flags>] Table 6.4. index prune flags Flag Description -i , --binary-image Container image for on-image opm command -c , --container-tool (string) Tool to interact with container images, such as for saving and building: docker or podman . -f , --from-index (string) Index to prune. --generate If enabled, only creates the Dockerfile and saves it to local disk. -d , --out-dockerfile (string) Optional: If generating the Dockerfile, specify a file name. -p , --packages (strings) Comma-separated list of packages to keep. --permissive Allow registry load errors. -t , --tag (string) Custom tag for container image being built. 6.2.1.3. prune-stranded Prune an index of stranded bundles, which are bundles that are not associated with a particular image. Command syntax USD opm index prune-stranded [<flags>] Table 6.5. index prune-stranded flags Flag Description -i , --binary-image Container image for on-image opm command -c , --container-tool (string) Tool to interact with container images, such as for saving and building: docker or podman . -f , --from-index (string) Index to prune. --generate If enabled, only creates the Dockerfile and saves it to local disk. -d , --out-dockerfile (string) Optional: If generating the Dockerfile, specify a file name. -p , --packages (strings) Comma-separated list of packages to keep. --permissive Allow registry load errors. -t , --tag (string) Custom tag for container image being built. 6.2.1.4. rm Delete an entire Operator from an index. Command syntax USD opm index rm [<flags>] Table 6.6. index rm flags Flag Description -i , --binary-image Container image for on-image opm command -u , --build-tool (string) Tool to build container images: podman (the default value) or docker . Overrides part of the --container-tool flag. -c , --container-tool (string) Tool to interact with container images, such as for saving and building: docker or podman . -f , --from-index (string) index to delete from. --generate If enabled, only creates the Dockerfile and saves it to local disk. -o , --operators (strings) Comma-separated list of Operators to delete. -d , --out-dockerfile (string) Optional: If generating the Dockerfile, specify a file name. -p , --packages (strings) Comma-separated list of packages to keep. --permissive Allow registry load errors. -p , --pull-tool (string) Tool to pull container images: none (the default value), docker , or podman . 
Overrides part of the --container-tool flag. -t , --tag (string) Custom tag for container image being built. 6.2.2. init Generate an olm.package declarative config blob. Command syntax USD opm init <package_name> [<flags>] Table 6.7. init flags Flag Description -c , --default-channel (string) The channel that subscriptions will default to if unspecified. -d , --description (string) Path to the Operator's README.md or other documentation. -i , --icon (string) Path to package's icon. -o , --output (string) Output format: json (the default value) or yaml . 6.2.3. render Generate a declarative config blob from the provided index images, bundle images, and SQLite database files. Command syntax USD opm render <index_image | bundle_image | sqlite_file> [<flags>] Table 6.8. render flags Flag Description -o , --output (string) Output format: json (the default value) or yaml . 6.2.4. validate Validate the declarative config JSON file(s) in a given directory. Command syntax USD opm validate <directory> [<flags>] 6.2.5. serve Serve declarative configs via a GRPC server. Note The declarative config directory is loaded by the serve command at startup. Changes made to the declarative config after this command starts are not reflected in the served content. Command syntax USD opm serve <source_path> [<flags>] Table 6.9. serve flags Flag Description --debug Enable debug logging. -p , --port (string) Port number to serve on. Default: 50051 . -t , --termination-log (string) Path to a container termination log file. Default: /dev/termination-log .
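The following sketch strings several of these subcommands together into a typical workflow; every image reference, package name, and path is a placeholder.

# Add a bundle image to an existing index (SQLite-based catalog format)
opm index add \
    --bundles quay.io/example/example-operator-bundle:v0.1.0 \
    --from-index quay.io/example/example-index:v4.10 \
    --tag quay.io/example/example-index:v4.10-1 \
    --container-tool podman

# Keep only selected packages in the index
opm index prune \
    --from-index quay.io/example/example-index:v4.10-1 \
    --packages example-operator \
    --tag quay.io/example/example-index:pruned

# Render the index to a declarative config file and serve it over GRPC
mkdir -p catalog
opm render quay.io/example/example-index:pruned --output yaml > catalog/index.yaml
opm serve catalog --port 50051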
[ "tar xvf <file>", "echo USDPATH", "sudo mv ./opm /usr/local/bin/", "C:\\> path", "C:\\> move opm.exe <directory>", "opm version", "opm <command> [<subcommand>] [<argument>] [<flags>]", "opm index <subcommand> [<flags>]", "opm index add [<flags>]", "opm index prune [<flags>]", "opm index prune-stranded [<flags>]", "opm index rm [<flags>]", "opm init <package_name> [<flags>]", "opm render <index_image | bundle_image | sqlite_file> [<flags>]", "opm validate <directory> [<flags>]", "opm serve <source_path> [<flags>]" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/cli_tools/opm-cli
Chapter 19. Migration of a DRL service to a Red Hat build of Kogito microservice
Chapter 19. Migration of a DRL service to a Red Hat build of Kogito microservice You can build and deploy a sample project in Red Hat build of Kogito to expose a stateless rules evaluation of the decision engine in a Red Hat build of Quarkus REST endpoint, and migrate the REST endpoint to Red Hat build of Kogito. The stateless rule evaluation is a single execution of a rule set in Red Hat Decision Manager and can be identified as a function invocation. In the invoked function, the output values are determined using the input values. Also, the invoked function uses the decision engine to perform the jobs. Therefore, in such cases, a function is exposed using a REST endpoint and converted into a microservice. After converting into a microservice, a function is deployed into a Function as a Service environment to eliminate the cost of JVM startup time. 19.1. Major changes and migration considerations The following table describes the major changes and features that affect migration from the KIE Server API and KJAR to Red Hat build of Kogito deployments: Table 19.1. DRL migration considerations Feature In KIE Server API In Red Hat build of Kogito with legacy API support In Red Hat build of Kogito artifact DRL files stored in src/main/resources folder of KJAR. copy as is to src/main/resources folder. rewrite using the rule units and OOPath. KieContainer configured using a system property or kmodule.xml file. replaced by KieRuntimeBuilder . not required. KieBase or KieSession configured using a system property or kmodule.xml file. configured using a system property or kmodule.xml file. replaced by rule units. 19.2. Migration strategy In Red Hat Decision Manager, you can migrate a rule evaluation to a Red Hat build of Kogito deployment in the following two ways: Using legacy API in Red Hat build of Kogito In Red Hat build of Kogito, the kogito-legacy-api module makes the legacy API of Red Hat Decision Manager available; therefore, the DRL files remain unchanged. This approach of migrating rule evaluation requires minimal changes and enables you to use major Red Hat build of Quarkus features, such as hot reload and native image creation. Migrating to Red Hat build of Kogito rule units Migrating to Red Hat build of Kogito rule units include the programming model of Red Hat build of Kogito, which is based on the concept of rule units. A rule unit in Red Hat build of Kogito includes both a set of rules and the facts, against which the rules are matched. Rule units in Red Hat build of Kogito also come with data sources. A rule unit data source is a source of the data processed by a given rule unit and represents the entry point, which is used to evaluate the rule unit. Rule units use two types of data sources: DataStream : This is an append-only data source and the facts added into the DataStream cannot be updated or removed. DataStore : This data source is for modifiable data. You can update or remove an object using the FactHandle that is returned when the object is added into the DataStore . Overall, a rule unit contains two parts: The definition of the fact to be evaluated and the set of rules evaluating the facts. 19.3. Example loan application project In the following sections, a loan application project is used as an example to migrate a DRL project to Red Hat build of Kogito deployments. 
The domain model of the loan application project is made of two classes, the LoanApplication class and the Applicant class: Example LoanApplication class public class LoanApplication { private String id; private Applicant applicant; private int amount; private int deposit; private boolean approved = false; public LoanApplication(String id, Applicant applicant, int amount, int deposit) { this.id = id; this.applicant = applicant; this.amount = amount; this.deposit = deposit; } } Example Applicant class public class Applicant { private String name; private int age; public Applicant(String name, int age) { this.name = name; this.age = age; } } The rule set is created using business decisions to approve or reject an application, along with the last rule of collecting all the approved applications in a list. Example rule set in loan application 19.3.1. Exposing rule evaluation with a REST endpoint using Red Hat build of Quarkus You can expose the rule evaluation that is developed in Business Central with a REST endpoint using Red Hat build of Quarkus. Procedure Create a new module based on the module that contains the rules and Quarkus libraries, providing the REST support: Example dependencies for creating a new module Create a REST endpoint. The following is an example setup for creating a REST endpoint: Example FindApprovedLoansEndpoint endpoint setup @Path("/find-approved") public class FindApprovedLoansEndpoint { private static final KieContainer kContainer = KieServices.Factory.get().newKieClasspathContainer(); @POST() @Produces(MediaType.APPLICATION_JSON) @Consumes(MediaType.APPLICATION_JSON) public List<LoanApplication> executeQuery(LoanAppDto loanAppDto) { KieSession session = kContainer.newKieSession(); List<LoanApplication> approvedApplications = new ArrayList<>(); session.setGlobal("approvedApplications", approvedApplications); session.setGlobal("maxAmount", loanAppDto.getMaxAmount()); loanAppDto.getLoanApplications().forEach(session::insert); session.fireAllRules(); session.dispose(); return approvedApplications; } } In the example, a KieContainer containing the rules is created and added into a static field. The rules in the KieContainer are obtained from the other module in the class path. Using this approach, you can reuse the same KieContainer for subsequent invocations related to the FindApprovedLoansEndpoint endpoint without recompiling the rules. Note The two modules are consolidated in the process of migrating rule units to a Red Hat build of Kogito microservice using legacy API. For more information, see Migrating DRL rules units to Red Hat build of Kogito microservice using legacy API . When the FindApprovedLoansEndpoint endpoint is invoked, a new KieSession is created from the KieContainer . The KieSession is populated with the objects from LoanAppDto resulting from the unmarshalling of a JSON request. Example LoanAppDto class public class LoanAppDto { private int maxAmount; private List<LoanApplication> loanApplications; public int getMaxAmount() { return maxAmount; } public void setMaxAmount(int maxAmount) { this.maxAmount = maxAmount; } public List<LoanApplication> getLoanApplications() { return loanApplications; } public void setLoanApplications(List<LoanApplication> loanApplications) { this.loanApplications = loanApplications; } } When the fireAllRules() method is called, KieSession is fired and the business logic is evaluated against the input data. 
After business logic evaluation, the last rule collects all the approved applications in a list and the same list is returned as an output. Start the Red Hat build of Quarkus application. Invoke the FindApprovedLoansEndpoint endpoint with a JSON request that contains the loan applications to be checked. The value of the maxAmount is used in the rules as shown in the following example: Example curl request Example JSON response [ { "id": "ABC10001", "applicant": { "name": "John", "age": 45 }, "amount": 2000, "deposit": 1000, "approved": true } ] Note Using this approach, you cannot use the hot reload feature and cannot create a native image of the project. In the steps, the missing Quarkus features are provided by the Kogito extension that enables Quarkus aware of the DRL files and implement the hot reload feature in a similar way. 19.3.2. Migrating a rule evaluation to a Red Hat build of Kogito microservice using legacy API After exposing a rule evaluation with a REST endpoint, you can migrate the rule evaluation to a Red Hat build of Kogito microservice using legacy API. Procedure Add the following dependencies to the project pom.xml file to enable the use of Red Hat build of Quarkus and legacy API: Example dependencies for using Quarkus and legacy API Rewrite the REST endpoint implementation: Example REST endpoint implementation @Path("/find-approved") public class FindApprovedLoansEndpoint { @Inject KieRuntimeBuilder kieRuntimeBuilder; @POST() @Produces(MediaType.APPLICATION_JSON) @Consumes(MediaType.APPLICATION_JSON) public List<LoanApplication> executeQuery(LoanAppDto loanAppDto) { KieSession session = kieRuntimeBuilder.newKieSession(); List<LoanApplication> approvedApplications = new ArrayList<>(); session.setGlobal("approvedApplications", approvedApplications); session.setGlobal("maxAmount", loanAppDto.getMaxAmount()); loanAppDto.getLoanApplications().forEach(session::insert); session.fireAllRules(); session.dispose(); return approvedApplications; } } In the rewritten REST endpoint implementation, instead of creating the KieSession from the KieContainer , the KieSession is created automatically using an integrated KieRuntimeBuilder . The KieRuntimeBuilder is an interface provided by the kogito-legacy-api module that replaces the KieContainer . Using KieRuntimeBuilder , you can create KieBases and KieSessions in a similar way you create in KieContainer . Red Hat build of Kogito automatically generates an implementation of KieRuntimeBuilder interface at compile time and integrates the KieRuntimeBuilder into a class, which implements the FindApprovedLoansEndpoint REST endpoint. Start your Red Hat build of Quarkus application in development mode. You can also use the hot reload to make the changes to the rules files that are applied to the running application. Also, you can create a native image of your rule based application. 19.3.3. Implementing rule units and automatic REST endpoint generation After migrating rule units to a Red Hat build of Kogito microservice, you can implement the rule units and automatic generation of the REST endpoint. In Red Hat build of Kogito, a rule unit contains a set of rules and the facts, against which the rules are matched. Rule units in Red Hat build of Kogito also come with data sources. A rule unit data source is a source of the data processed by a given rule unit and represents the entry point, which is used to evaluate the rule unit. Rule units use two types of data sources: DataStream : This is an append-only data source. 
In DataStream , subscribers receive new and past messages, stream can be hot or cold in the reactive streams. Also, the facts added into the DataStream cannot be updated or removed. DataStore : This data source is for modifiable data. You can update or remove an object using the FactHandle that is returned when the object is added into the DataStore . Overall, a rule unit contains two parts: the definition of the fact to be evaluated and the set of rules evaluating the facts. Procedure Implement a fact definition using POJO: Example implementation of a fact definition using POJO package org.kie.kogito.queries; import org.kie.kogito.rules.DataSource; import org.kie.kogito.rules.DataStore; import org.kie.kogito.rules.RuleUnitData; public class LoanUnit implements RuleUnitData { private int maxAmount; private DataStore<LoanApplication> loanApplications; public LoanUnit() { this(DataSource.createStore(), 0); } public LoanUnit(DataStore<LoanApplication> loanApplications, int maxAmount) { this.loanApplications = loanApplications; this.maxAmount = maxAmount; } public DataStore<LoanApplication> getLoanApplications() { return loanApplications; } public void setLoanApplications(DataStore<LoanApplication> loanApplications) { this.loanApplications = loanApplications; } public int getMaxAmount() { return maxAmount; } public void setMaxAmount(int maxAmount) { this.maxAmount = maxAmount; } } In the example, instead of using LoanAppDto the LoanUnit class is bound directly. LoanAppDto is used to marshall or unmarshall JSON requests. Also, the example implements the org.kie.kogito.rules.RuleUnitData interface and uses a DataStore to contain the loan applications to be approved. The org.kie.kogito.rules.RuleUnitData is a marker interface to notify the decision engine that LoanUnit class is part of a rule unit definition. In addition, the DataStore is responsible to allow the rule engine to react on the changes by firing new rules and triggering other rules. Additionally, the consequences of the rules modify the approved property in the example. On the contrary, the maxAmount value is considered as a configuration parameter for the rule unit, which is not modified. The maxAmount is processed automatically during the rules evaluation and automatically set from the value passed in the JSON requests. Implement a DRL file: Example implementation of a DRL file The DRL file that you create must declare the same package as fact definition implementation and a unit with the same name of the Java class. The Java class implements the RuleUnitData interface to state that the interface belongs to the same rule unit. Also, the DRL file in the example is rewritten using the OOPath expressions. In the DRL file, the data source acts as an entry point and the OOPath expression contains the data source name as root. However, the constraints are added in square brackets as follows: USDl: /loanApplications[ applicant.age >= 20, deposit >= 1000, amount ⇐ maxAmount ] Alternatively, you can use the standard DRL syntax, in which you can specify the data source name as an entry point. However, you need to specify the type of the matched object again as shown in the following example, even if the decision engine can infer the type from the data source: USDl: LoanApplication( applicant.age >= 20, deposit >= 1000, amount ⇐ maxAmount ) from entry-point loanApplications In the example, the last rule that collects all the approved loan applications is replaced by a query that retrieves the list. 
A rule unit defines the facts to be passed in input to evaluate the rules, and the query defines the expected output from the rule evaluation. Using this approach, Red Hat build of Kogito can automatically generate a class that executes the query and returns the output as shown in the following example: Example LoanUnitQueryFindApproved class public class LoanUnitQueryFindApproved implements org.kie.kogito.rules.RuleUnitQuery<List<org.kie.kogito.queries.LoanApplication>> { private final RuleUnitInstance<org.kie.kogito.queries.LoanUnit> instance; public LoanUnitQueryFindApproved(RuleUnitInstance<org.kie.kogito.queries.LoanUnit> instance) { this.instance = instance; } @Override public List<org.kie.kogito.queries.LoanApplication> execute() { return instance.executeQuery("FindApproved").stream().map(this::toResult).collect(toList()); } private org.kie.kogito.queries.LoanApplication toResult(Map<String, Object> tuple) { return (org.kie.kogito.queries.LoanApplication) tuple.get("USDl"); } } The following is an example of a REST endpoint that takes a rule unit as input and passing the input to a query executor to return the output: Example LoanUnitQueryFindApprovedEndpoint endpoint @Path("/find-approved") public class LoanUnitQueryFindApprovedEndpoint { @javax.inject.Inject RuleUnit<org.kie.kogito.queries.LoanUnit> ruleUnit; public LoanUnitQueryFindApprovedEndpoint() { } public LoanUnitQueryFindApprovedEndpoint(RuleUnit<org.kie.kogito.queries.LoanUnit> ruleUnit) { this.ruleUnit = ruleUnit; } @POST() @Produces(MediaType.APPLICATION_JSON) @Consumes(MediaType.APPLICATION_JSON) public List<org.kie.kogito.queries.LoanApplication> executeQuery(org.kie.kogito.queries.LoanUnit unit) { RuleUnitInstance<org.kie.kogito.queries.LoanUnit> instance = ruleUnit.createInstance(unit); return instance.executeQuery(LoanUnitQueryFindApproved.class); } } Note You can also add multiple queries and for each query, a different REST endpoint is generated. For example, the FindApproved REST endpoint is generated for find-approved.
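A hedged example of calling the generated endpoint follows; the payload mirrors the earlier curl request, and the values, host, and port are illustrative.

# Invoke the generated query endpoint with a JSON representation of the rule unit
curl -X POST -H 'Accept: application/json' -H 'Content-Type: application/json' \
  -d '{"maxAmount":5000,
       "loanApplications":[
         {"id":"ABC10001","amount":2000,"deposit":1000,"applicant":{"age":45,"name":"John"}},
         {"id":"ABC10002","amount":5000,"deposit":100,"applicant":{"age":25,"name":"Paul"}}
       ]}' \
  http://localhost:8080/find-approved

# Expected response: the applications returned by the FindApproved query, for example
# [{"id":"ABC10001","applicant":{"name":"John","age":45},"amount":2000,"deposit":1000,"approved":true}]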
[ "public class LoanApplication { private String id; private Applicant applicant; private int amount; private int deposit; private boolean approved = false; public LoanApplication(String id, Applicant applicant, int amount, int deposit) { this.id = id; this.applicant = applicant; this.amount = amount; this.deposit = deposit; } }", "public class Applicant { private String name; private int age; public Applicant(String name, int age) { this.name = name; this.age = age; } }", "global Integer maxAmount; global java.util.List approvedApplications; rule LargeDepositApprove when USDl: LoanApplication( applicant.age >= 20, deposit >= 1000, amount <= maxAmount ) then modify(USDl) { setApproved(true) }; // loan is approved end rule LargeDepositReject when USDl: LoanApplication( applicant.age >= 20, deposit >= 1000, amount > maxAmount ) then modify(USDl) { setApproved(false) }; // loan is rejected end // ... more loans approval/rejections business rules rule CollectApprovedApplication when USDl: LoanApplication( approved ) then approvedApplications.add(USDl); // collect all approved loan applications end", "<dependencies> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-resteasy</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-resteasy-jackson</artifactId> </dependency> <dependency> <groupId>org.example</groupId> <artifactId>drools-project</artifactId> <version>1.0-SNAPSHOT</version> </dependency> <dependencies>", "@Path(\"/find-approved\") public class FindApprovedLoansEndpoint { private static final KieContainer kContainer = KieServices.Factory.get().newKieClasspathContainer(); @POST() @Produces(MediaType.APPLICATION_JSON) @Consumes(MediaType.APPLICATION_JSON) public List<LoanApplication> executeQuery(LoanAppDto loanAppDto) { KieSession session = kContainer.newKieSession(); List<LoanApplication> approvedApplications = new ArrayList<>(); session.setGlobal(\"approvedApplications\", approvedApplications); session.setGlobal(\"maxAmount\", loanAppDto.getMaxAmount()); loanAppDto.getLoanApplications().forEach(session::insert); session.fireAllRules(); session.dispose(); return approvedApplications; } }", "public class LoanAppDto { private int maxAmount; private List<LoanApplication> loanApplications; public int getMaxAmount() { return maxAmount; } public void setMaxAmount(int maxAmount) { this.maxAmount = maxAmount; } public List<LoanApplication> getLoanApplications() { return loanApplications; } public void setLoanApplications(List<LoanApplication> loanApplications) { this.loanApplications = loanApplications; } }", "curl -X POST -H 'Accept: application/json' -H 'Content-Type: application/json' -d '{\"maxAmount\":5000, \"loanApplications\":[ {\"id\":\"ABC10001\",\"amount\":2000,\"deposit\":1000,\"applicant\":{\"age\":45,\"name\":\"John\"}}, {\"id\":\"ABC10002\",\"amount\":5000,\"deposit\":100,\"applicant\":{\"age\":25,\"name\":\"Paul\"}}, {\"id\":\"ABC10015\",\"amount\":1000,\"deposit\":100,\"applicant\":{\"age\":12,\"name\":\"George\"}} ]}' http://localhost:8080/find-approved", "[ { \"id\": \"ABC10001\", \"applicant\": { \"name\": \"John\", \"age\": 45 }, \"amount\": 2000, \"deposit\": 1000, \"approved\": true } ]", "<dependencies> <dependency> <groupId>org.kie.kogito</groupId> <artifactId>kogito-quarkus-rules</artifactId> </dependency> <dependency> <groupId>org.kie.kogito</groupId> <artifactId>kogito-legacy-api</artifactId> </dependency> </dependencies>", "@Path(\"/find-approved\") public class FindApprovedLoansEndpoint { @Inject KieRuntimeBuilder 
kieRuntimeBuilder; @POST() @Produces(MediaType.APPLICATION_JSON) @Consumes(MediaType.APPLICATION_JSON) public List<LoanApplication> executeQuery(LoanAppDto loanAppDto) { KieSession session = kieRuntimeBuilder.newKieSession(); List<LoanApplication> approvedApplications = new ArrayList<>(); session.setGlobal(\"approvedApplications\", approvedApplications); session.setGlobal(\"maxAmount\", loanAppDto.getMaxAmount()); loanAppDto.getLoanApplications().forEach(session::insert); session.fireAllRules(); session.dispose(); return approvedApplications; } }", "package org.kie.kogito.queries; import org.kie.kogito.rules.DataSource; import org.kie.kogito.rules.DataStore; import org.kie.kogito.rules.RuleUnitData; public class LoanUnit implements RuleUnitData { private int maxAmount; private DataStore<LoanApplication> loanApplications; public LoanUnit() { this(DataSource.createStore(), 0); } public LoanUnit(DataStore<LoanApplication> loanApplications, int maxAmount) { this.loanApplications = loanApplications; this.maxAmount = maxAmount; } public DataStore<LoanApplication> getLoanApplications() { return loanApplications; } public void setLoanApplications(DataStore<LoanApplication> loanApplications) { this.loanApplications = loanApplications; } public int getMaxAmount() { return maxAmount; } public void setMaxAmount(int maxAmount) { this.maxAmount = maxAmount; } }", "package org.kie.kogito.queries; unit LoanUnit; // no need to using globals, all variables and facts are stored in the rule unit rule LargeDepositApprove when USDl: /loanApplications[ applicant.age >= 20, deposit >= 1000, amount <= maxAmount ] // oopath style then modify(USDl) { setApproved(true) }; end rule LargeDepositReject when USDl: /loanApplications[ applicant.age >= 20, deposit >= 1000, amount > maxAmount ] then modify(USDl) { setApproved(false) }; end // ... more loans approval/rejections business rules // approved loan applications are now retrieved through a query query FindApproved USDl: /loanApplications[ approved ] end", "public class LoanUnitQueryFindApproved implements org.kie.kogito.rules.RuleUnitQuery<List<org.kie.kogito.queries.LoanApplication>> { private final RuleUnitInstance<org.kie.kogito.queries.LoanUnit> instance; public LoanUnitQueryFindApproved(RuleUnitInstance<org.kie.kogito.queries.LoanUnit> instance) { this.instance = instance; } @Override public List<org.kie.kogito.queries.LoanApplication> execute() { return instance.executeQuery(\"FindApproved\").stream().map(this::toResult).collect(toList()); } private org.kie.kogito.queries.LoanApplication toResult(Map<String, Object> tuple) { return (org.kie.kogito.queries.LoanApplication) tuple.get(\"USDl\"); } }", "@Path(\"/find-approved\") public class LoanUnitQueryFindApprovedEndpoint { @javax.inject.Inject RuleUnit<org.kie.kogito.queries.LoanUnit> ruleUnit; public LoanUnitQueryFindApprovedEndpoint() { } public LoanUnitQueryFindApprovedEndpoint(RuleUnit<org.kie.kogito.queries.LoanUnit> ruleUnit) { this.ruleUnit = ruleUnit; } @POST() @Produces(MediaType.APPLICATION_JSON) @Consumes(MediaType.APPLICATION_JSON) public List<org.kie.kogito.queries.LoanApplication> executeQuery(org.kie.kogito.queries.LoanUnit unit) { RuleUnitInstance<org.kie.kogito.queries.LoanUnit> instance = ruleUnit.createInstance(unit); return instance.executeQuery(LoanUnitQueryFindApproved.class); } }" ]
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/getting_started_with_red_hat_build_of_kogito_in_red_hat_decision_manager/con-migrate-drl-to-kogito-loan-overview_migration-kogito-microservices
Chapter 130. KafkaBridgeProducerSpec schema reference
Chapter 130. KafkaBridgeProducerSpec schema reference Used in: KafkaBridgeSpec Full list of KafkaBridgeProducerSpec schema properties Configures producer options for the Kafka Bridge as keys. The values can be one of the following JSON types: String Number Boolean Exceptions You can specify and configure the options listed in the Apache Kafka configuration documentation for producers . However, Streams for Apache Kafka takes care of configuring and managing options related to the following, which cannot be changed: Kafka cluster bootstrap address Security (encryption, authentication, and authorization) Consumer group identifier Properties with the following prefixes cannot be set: bootstrap.servers sasl. security. ssl. If the config property contains an option that cannot be changed, it is disregarded, and a warning message is logged to the Cluster Operator log file. All other supported options are forwarded to Kafka Bridge, including the following exceptions to the options configured by Streams for Apache Kafka: Any ssl configuration for supported TLS versions and cipher suites Example Kafka Bridge producer configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # ... producer: config: acks: 1 delivery.timeout.ms: 300000 # ... Important The Cluster Operator does not validate keys or values in the config object. If an invalid configuration is provided, the Kafka Bridge deployment might not start or might become unstable. In this case, fix the configuration so that the Cluster Operator can roll out the new configuration to all Kafka Bridge nodes. 130.1. KafkaBridgeProducerSpec schema properties Property Property type Description config map The Kafka producer configuration used for producer instances created by the bridge. Properties with the following prefixes cannot be set: ssl., bootstrap.servers, sasl., security. (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols).
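As an illustrative sketch only (the ssl values below are assumptions, not requirements from this reference), the permitted ssl exceptions can sit alongside ordinary producer options, while any key with a forbidden prefix, such as bootstrap.servers, would be disregarded and logged as a warning:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  # ...
  producer:
    config:
      acks: 1
      delivery.timeout.ms: 300000
      # permitted exceptions to the ssl. prefix restriction
      ssl.endpoint.identification.algorithm: HTTPS
      ssl.enabled.protocols: TLSv1.2
      # bootstrap.servers: my-cluster-kafka-bootstrap:9092   # would be ignored and logged as a warning
  # ...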
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # producer: config: acks: 1 delivery.timeout.ms: 300000 #" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-KafkaBridgeProducerSpec-reference
9.5. Caching Considerations
9.5. Caching Considerations Although you can find information about all JBoss Data Virtualization settings using the Management CLI (see Section 10.1, "JBoss Data Virtualization Settings" ), this section provides some additional information about those settings related to caching. JBoss Data Virtualization settings regarding cache tuning are divided among: Resultset Cache Tuning Prepared Plan Cache Tuning Cache statistics can be obtained through the Management Console or AdminShell. The statistics can be used to help tune cache parameters and ensure a good hit ratio. Plans are currently fully held in memory and may have a significant memory footprint. When making extensive use of prepared statements and/or virtual procedures, the size of the plan cache may be increased proportionally to the number of gigabytes intended for use by JBoss Data Virtualization. While the result cache parameters control the cached result entries (such as the maximum number of entries and their eviction), the result batches themselves are accessed through the buffer manager. If the size of the result cache is increased, you may need to tune the buffer manager configuration to ensure there is enough buffer space. Result set and prepared plan caches have their entries invalidated by data and metadata events. By default, these events are captured by running commands through JBoss Data Virtualization (see the Red Hat JBoss Data Virtualization Developer Guide for further customization). JBoss Data Virtualization stores compiled forms of update plans or trigger actions with the prepared plan so that if metadata changes, the changes may take effect immediately. The default resultset-cache-max-staleness for resultset caching is 60 seconds to improve efficiency with rapidly changing sources. Consider decreasing this value to make the resultset cache more consistent with the underlying data. Even with a setting of 0, full transactional consistency is not guaranteed. Warning Disabling or constraining these caches will lead to poor performance.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/caching_considerations
Chapter 5. Understanding OpenShift Container Platform development
Chapter 5. Understanding OpenShift Container Platform development To fully leverage the capability of containers when developing and running enterprise-quality applications, ensure your environment is supported by tools that allow containers to be: Created as discrete microservices that can be connected to other containerized, and non-containerized, services. For example, you might want to join your application with a database or attach a monitoring application to it. Resilient, so if a server crashes or needs to go down for maintenance or to be decommissioned, containers can start on another machine. Automated to pick up code changes automatically and then start and deploy new versions of themselves. Scaled up, or replicated, to have more instances serving clients as demand increases and then spun down to fewer instances as demand declines. Run in different ways, depending on the type of application. For example, one application might run once a month to produce a report and then exit. Another application might need to run constantly and be highly available to clients. Managed so you can watch the state of your application and react when something goes wrong. Containers' widespread acceptance, and the resulting requirements for tools and methods to make them enterprise-ready, resulted in many options for them. The rest of this section explains options for assets you can create when you build and deploy containerized Kubernetes applications in OpenShift Container Platform. It also describes which approaches you might use for different kinds of applications and development requirements. 5.1. About developing containerized applications You can approach application development with containers in many ways, and different approaches might be more appropriate for different situations. To illustrate some of this variety, the series of approaches that is presented starts with developing a single container and ultimately deploys that container as a mission-critical application for a large enterprise. These approaches show different tools, formats, and methods that you can employ with containerized application development. This topic describes: Building a simple container and storing it in a registry Creating a Kubernetes manifest and saving it to a Git repository Making an Operator to share your application with others 5.2. Building a simple container You have an idea for an application and you want to containerize it. First, you require a tool for building a container, like buildah or docker, and a file that describes what goes in your container, which is typically a Dockerfile . Next, you require a location to push the resulting container image so you can pull it to run anywhere you want it to run. This location is a container registry. Some examples of each of these components are installed by default on most Linux operating systems, except for the Dockerfile, which you provide yourself. The following diagram displays the process of building and pushing an image: Figure 5.1. Create a simple containerized application and push it to a registry If you use a computer that runs Red Hat Enterprise Linux (RHEL) as the operating system, the process of creating a containerized application requires the following steps: Install container build tools: RHEL contains a set of tools that includes podman, buildah, and skopeo that you use to build and manage containers. Create a Dockerfile to combine base image and software: Information about building your container goes into a file that is named Dockerfile . 
In that file, you identify the base image you build from, the software packages you install, and the software you copy into the container. You also identify parameter values like network ports that you expose outside the container and volumes that you mount inside the container. Put your Dockerfile and the software you want to containerize in a directory on your RHEL system. Run buildah or docker build: Run the buildah build-using-dockerfile or the docker build command to pull your chosen base image to the local system and create a container image that is stored locally. You can also build container images without a Dockerfile by using buildah. Tag and push to a registry: Add a tag to your new container image that identifies the location of the registry in which you want to store and share your container. Then push that image to the registry by running the podman push or docker push command. Pull and run the image: From any system that has a container client tool, such as podman or docker, run a command that identifies your new image. For example, run the podman run <image_name> or docker run <image_name> command. Here <image_name> is the name of your new container image, which resembles quay.io/myrepo/myapp:latest . The registry might require credentials to push and pull images. For more details on the process of building container images, pushing them to registries, and running them, see Custom image builds with Buildah . 5.2.1. Container build tool options Building and managing containers with buildah, podman, and skopeo results in industry standard container images that include features specifically tuned for deploying containers in OpenShift Container Platform or other Kubernetes environments. These tools are daemonless and can run without root privileges, requiring less overhead to run them. Important Support for Docker Container Engine as a container runtime is deprecated in Kubernetes 1.20 and will be removed in a future release. However, Docker-produced images will continue to work in your cluster with all runtimes, including CRI-O. For more information, see the Kubernetes blog announcement . When you ultimately run your containers in OpenShift Container Platform, you use the CRI-O container engine. CRI-O runs on every worker and control plane machine (also known as the master machine) in an OpenShift Container Platform cluster, but CRI-O is not yet supported as a standalone runtime outside of OpenShift Container Platform. 5.2.2. Base image options The base image you choose to build your application on contains a set of software that resembles a Linux system to your application. When you build your own image, your software is placed into that file system and sees that file system as though it were looking at its operating system. Choosing this base image has major impact on how secure, efficient and upgradeable your container is in the future. Red Hat provides a new set of base images referred to as Red Hat Universal Base Images (UBI). These images are based on Red Hat Enterprise Linux and are similar to base images that Red Hat has offered in the past, with one major difference: they are freely redistributable without a Red Hat subscription. As a result, you can build your application on UBI images without having to worry about how they are shared or the need to create different images for different environments. These UBI images have standard, init, and minimal versions. 
You can also use the Red Hat Software Collections images as a foundation for applications that rely on specific runtime environments such as Node.js, Perl, or Python. Special versions of some of these runtime base images are referred to as Source-to-Image (S2I) images. With S2I images, you can insert your code into a base image environment that is ready to run that code. S2I images are available for you to use directly from the OpenShift Container Platform web UI by selecting Catalog Developer Catalog , as shown in the following figure: Figure 5.2. Choose S2I base images for apps that need specific runtimes 5.2.3. Registry options Container registries are where you store container images so you can share them with others and make them available to the platform where they ultimately run. You can select large, public container registries that offer free accounts or a premium version that offer more storage and special features. You can also install your own registry that can be exclusive to your organization or selectively shared with others. To get Red Hat images and certified partner images, you can draw from the Red Hat Registry. The Red Hat Registry is represented by two locations: registry.access.redhat.com , which is unauthenticated and deprecated, and registry.redhat.io , which requires authentication. You can learn about the Red Hat and partner images in the Red Hat Registry from the Container images section of the Red Hat Ecosystem Catalog . Besides listing Red Hat container images, it also shows extensive information about the contents and quality of those images, including health scores that are based on applied security updates. Large, public registries include Docker Hub and Quay.io . The Quay.io registry is owned and managed by Red Hat. Many of the components used in OpenShift Container Platform are stored in Quay.io, including container images and the Operators that are used to deploy OpenShift Container Platform itself. Quay.io also offers the means of storing other types of content, including Helm charts. If you want your own, private container registry, OpenShift Container Platform itself includes a private container registry that is installed with OpenShift Container Platform and runs on its cluster. Red Hat also offers a private version of the Quay.io registry called Red Hat Quay . Red Hat Quay includes geo replication, Git build triggers, Clair image scanning, and many other features. All of the registries mentioned here can require credentials to download images from those registries. Some of those credentials are presented on a cluster-wide basis from OpenShift Container Platform, while other credentials can be assigned to individuals. 5.3. Creating a Kubernetes manifest for OpenShift Container Platform While the container image is the basic building block for a containerized application, more information is required to manage and deploy that application in a Kubernetes environment such as OpenShift Container Platform. The typical steps after you create an image are to: Understand the different resources you work with in Kubernetes manifests Make some decisions about what kind of an application you are running Gather supporting components Create a manifest and store that manifest in a Git repository so you can store it in a source versioning system, audit it, track it, promote and deploy it to the environment, roll it back to earlier versions, if necessary, and share it with others 5.3.1. 
About Kubernetes pods and services While the container image is the basic unit with docker, the basic units that Kubernetes works with are called pods . Pods represent the next step in building out an application. A pod can contain one or more containers. The key is that the pod is the single unit that you deploy, scale, and manage. Scalability and namespaces are probably the main items to consider when determining what goes in a pod. For ease of deployment, you might want to deploy a container in a pod and include its own logging and monitoring container in the pod. Later, when you run the pod and need to scale up an additional instance, those other containers are scaled up with it. For namespaces, containers in a pod share the same network interfaces, shared storage volumes, and resource limitations, such as memory and CPU, which makes it easier to manage the contents of the pod as a single unit. Containers in a pod can also communicate with each other by using standard inter-process communications, such as System V semaphores or POSIX shared memory. While individual pods represent a scalable unit in Kubernetes, a service provides a means of grouping together a set of pods to create a complete, stable application that can complete tasks such as load balancing. A service is also more permanent than a pod because the service remains available from the same IP address until you delete it. When the service is in use, it is requested by name and the OpenShift Container Platform cluster resolves that name into the IP addresses and ports where you can reach the pods that compose the service. By their nature, containerized applications are separated from the operating systems where they run and, by extension, their users. Part of your Kubernetes manifest describes how to expose the application to internal and external networks by defining network policies that allow fine-grained control over communication with your containerized applications. To connect incoming requests for HTTP, HTTPS, and other services from outside your cluster to services inside your cluster, you can use an Ingress resource. If your container requires on-disk storage instead of database storage, which might be provided through a service, you can add volumes to your manifests to make that storage available to your pods. You can configure the manifests to create persistent volumes (PVs) or dynamically create volumes that are added to your Pod definitions. After you define a group of pods that compose your application, you can define those pods in Deployment and DeploymentConfig objects. (A minimal deployment and service sketch appears at the end of this chapter.) 5.3.2. Application types Next, consider how your application type influences how to run it. Kubernetes defines different types of workloads that are appropriate for different kinds of applications. To determine the appropriate workload for your application, consider if the application is: Meant to run to completion and be done. An example is an application that starts up to produce a report and exits when the report is complete. The application might then not run again for a month. Suitable OpenShift Container Platform objects for these types of applications include Job and CronJob objects. Expected to run continuously. For long-running applications, you can write a deployment . Required to be highly available. If your application requires high availability, then you want to size your deployment to have more than one instance. A Deployment or DeploymentConfig object can incorporate a replica set for that type of application. 
With replica sets, pods run across multiple nodes to make sure the application is always available, even if a worker goes down. Need to run on every node. Some types of Kubernetes applications are intended to run in the cluster itself on every master or worker node. DNS and monitoring applications are examples of applications that need to run continuously on every node. You can run this type of application as a daemon set . You can also run a daemon set on a subset of nodes, based on node labels. Require life-cycle management. When you want to hand off your application so that others can use it, consider creating an Operator . Operators let you build in intelligence, so they can handle things like backups and upgrades automatically. Coupled with the Operator Lifecycle Manager (OLM), cluster managers can expose Operators to selected namespaces so that users in the cluster can run them. Have identity or numbering requirements. An application might have identity requirements or numbering requirements. For example, you might be required to run exactly three instances of the application and to name the instances 0 , 1 , and 2 . A stateful set is suitable for this application. Stateful sets are most useful for applications that require independent storage, such as databases and zookeeper clusters. 5.3.3. Available supporting components The application you write might need supporting components, like a database or a logging component. To fulfill that need, you might be able to obtain the required component from the following Catalogs that are available in the OpenShift Container Platform web console: OperatorHub, which is available in each OpenShift Container Platform 4.7 cluster. The OperatorHub makes Operators available from Red Hat, certified Red Hat partners, and community members to the cluster operator. The cluster operator can make those Operators available in all or selected namespaces in the cluster, so developers can launch them and configure them with their applications. Templates, which are useful for a one-off type of application, where the lifecycle of a component is not important after it is installed. A template provides an easy way to get started developing a Kubernetes application with minimal overhead. A template can be a list of resource definitions, which could be Deployment , Service , Route , or other objects. If you want to change names or resources, you can set these values as parameters in the template. You can configure the supporting Operators and templates to the specific needs of your development team and then make them available in the namespaces in which your developers work. Many people add shared templates to the openshift namespace because it is accessible from all other namespaces. 5.3.4. Applying the manifest Kubernetes manifests let you create a more complete picture of the components that make up your Kubernetes applications. You write these manifests as YAML files and deploy them by applying them to the cluster, for example, by running the oc apply command. 5.3.5. Next steps At this point, consider ways to automate your container development process. Ideally, you have some sort of CI pipeline that builds the images and pushes them to a registry. In particular, a GitOps pipeline integrates your container development with the Git repositories that you use to store the software that is required to build your applications. The workflow to this point might look like: Day 1: You write some YAML. 
You then run the oc apply command to apply that YAML to the cluster and test that it works. Day 2: You put your YAML container configuration file into your own Git repository. From there, people who want to install that app, or help you improve it, can pull down the YAML and apply it to their cluster to run the app. Day 3: Consider writing an Operator for your application. 5.4. Develop for Operators Packaging and deploying your application as an Operator might be preferred if you make your application available for others to run. As noted earlier, Operators add a lifecycle component to your application that acknowledges that the job of running an application is not complete as soon as it is installed. When you create an application as an Operator, you can build in your own knowledge of how to run and maintain the application. You can build in features for upgrading the application, backing it up, scaling it, or keeping track of its state. If you configure the application correctly, maintenance tasks, like updating the Operator, can happen automatically and invisibly to the Operator's users. An example of a useful Operator is one that is set up to automatically back up data at particular times. Having an Operator manage an application's backup at set times can save a system administrator from remembering to do it. Any application maintenance that has traditionally been completed manually, like backing up data or rotating certificates, can be completed automatically with an Operator.
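To make the preceding discussion of pods, services, and workload types concrete, here is a minimal sketch; the names and labels are hypothetical, and the image path only follows the quay.io/myrepo/myapp:latest form shown earlier, so treat it as an assumption rather than an excerpt from this guide. A Deployment keeps several replicated pods of a long-running application available, and a Service groups those pods behind one stable name:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                              # hypothetical name
spec:
  replicas: 3                              # more than one instance for high availability
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: quay.io/myrepo/myapp:latest # hypothetical image location
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp                             # groups all pods that carry this label
  ports:
  - port: 80
    targetPort: 8080
A run-to-completion workload would use a Job or CronJob object in place of the Deployment, and either manifest is applied with the oc apply command described above.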
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/architecture/understanding-development
Using the AMQ C++ Client
Using the AMQ C++ Client Red Hat AMQ 2021.Q3 For Use with AMQ Clients 2.10
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_cpp_client/index
Chapter 5. View OpenShift Data Foundation Topology
Chapter 5. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you interact with these layers. The view also shows how the various elements together compose the storage cluster. Procedure On the OpenShift Web Console, navigate to Storage Data Foundation Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status, health, and alert indications. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal's upper left corner to close and return to the view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view information about the pods. This tab provides a deeper understanding of problems and offers a level of granularity that aids in troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_openshift_data_foundation_using_ibm_power/viewing-odf-topology_mcg-verify
26.2. DM Multipath
26.2. DM Multipath DM Multipath is a feature that allows you to configure multiple I/O paths between server nodes and storage arrays into a single device. These I/O paths are physical SAN connections that can include separate cables, switches, and controllers. Multipathing aggregates the I/O paths, creating a new device that consists of the aggregated paths. DM Multipath is used primarily for the following reasons: Redundancy DM Multipath can provide failover in an active/passive configuration. In an active/passive configuration, only half the paths are used at any time for I/O. If any element of an I/O path fails, DM Multipath switches to an alternate path. Improved Performance DM Multipath can be configured in active/active mode, where I/O is spread over the paths in a round-robin fashion. In some configurations, DM Multipath can detect loading on the I/O paths and dynamically rebalance the load. For more information, see the Red Hat DM Multipath guide.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/dmmultipath
Chapter 8. Kerberos Support Through GSS API
Chapter 8. Kerberos Support Through GSS API 8.1. Kerberos and Red Hat JBoss Data Virtualization 8.1.1. Introduction to Kerberos Authentication on Red Hat JBoss Data Virtualization Red Hat JBoss Data Virtualization supports Kerberos authentication using the GSS API for single sign-on applications. This service ticket negotiation-based authentication is supported through remote JDBC/ODBC drivers and LocalConnections. The client has to be configured differently for each variant. 8.1.1.1. Local Connection Overview For a local connection, set the JDBC URL property PassthroughAuthentication to true and use JBoss Negotiation to authenticate your web application with Kerberos. When the web application is authenticated with the provided Kerberos token, it can be used in Red Hat JBoss Data Virtualization. For details about how to configure this, please refer to the JBoss Negotiation documentation. 8.1.1.2. Remote Connections (JDBC/ODBC) Open the standalone.xml file in your text editor. Go to the "security-domains" section and add the following, customizing where necessary for your system. Note You need to configure two separate security domains. Configure one security domain to represent the identity of the server. The first security domain authenticates the container itself to the directory service. It needs to use a login module which accepts some type of static login mechanism, because a real user is not involved. This example uses a static principal and references a keytab file which contains the credential. Configure a second security domain to secure the application. The second security domain is used to authenticate the individual user to the Kerberos server. (You need at least one login module to authenticate the user, and another to search for the roles to apply to the user.) The following XML code shows an example SPNEGO security domain. It includes an authorization module to map roles to individual users. You can also use a module which searches for the roles on the authentication server itself. Note The name of the security-domain must match that of the realm. 8.1.1.3. User Roles and Groups Kerberos does not assign any user roles to the authenticated subject. Therefore, you need to configure a separate role-mapping module to do this work. In the example above, the "UserRoles" login-module was added. To assign groups, you must edit the "spnego-roles.properties" file and add them using this syntax: user@MY_REALM=my-group Please refer to the Red Hat JBoss EAP documentation for more information about how to do this. The SPNEGO security-domain delegates the calls relating to Kerberos to the Kerberos server based on the "serverSecurityDomain" property. To customize it, add the following to the SPNEGO security domain: Once your security domains have been defined, you need to associate them with Red Hat JBoss Data Virtualization's transport configuration or virtual database configuration. To define a default JDBC transport configuration, add this code: For an ODBC transport, add this code: Table 8.1. Type Values Value Description USERPASSWORD This only allows you to create username and password-based authentication. GSS This allows you to create GSS API-based authentication using Kerberos5. To define a VDB-based authentication, add a combination of the following optional properties to the vdb.xml file: Table 8.2. Table Properties Property Description security-domain Use this to define VDB-based security. authentication-type This allows you to enforce a single authentication type. 
gss-pattern This allows you to use GSS. password-pattern This allows you to use USERPASSWORD. During the connection, these regular expressions are matched against the connecting user's name to determine the user's preferred authentication method. Here is an example: In this case, if "user=logasgss" is passed in the connection string, then GSS authentication will be used to authenticate the user. If there is no match, then the default transport's authentication method is selected. You can configure different security-domains for different virtual databases and authentication will no longer be dependent upon the underlying transport. For instance, if you wish to make GSS the permanent default, use this code: Open the {jboss-as}/bin/standalone.conf file in your text editor and add the following JVM options (changing the realm and KDC settings according to your environment): Alternatively, you can use this. Another way of doing this is to add these properties to the standalone.xml file, after the extensions section: Restart the server. There should be no errors. 8.1.1.4. JDBC Client Configuration You must configure your JDBC Client workstation so that it authenticates using the GSS API. The workstation on which the JDBC Client exists must have been authenticated using GSS API against Active Directory or an enterprise directory server. Go to this website for information on this: http://spnego.sourceforge.net You must now add a JAAS configuration for Kerberos authentication to your Java virtual machine. Here is a sample client.conf file: Check that you have configured the "keytab" properly. For information on how to do this for Microsoft Windows environments, go to this website: http://spnego.sourceforge.net For information on how to do this for Red Hat Enterprise Linux, go to this site: https://access.redhat.com/site/solutions/208173 Add the following JVM options to your client's initialization script, customizing the realm and KDC information for your environment. This first sample is based on the krb5.conf file: This alternative version is based on the KDC and Realm file: Add the following additional URL connection properties to the Red Hat JBoss Data Virtualization JDBC connection string along with the URL property: Note When you configure it to use Kerberos, you need to configure the "user" property as required by the "gss-pattern" or define the "authentication-type" property on the VDB or transport. However, after a successful login to the security domain, the user name from the GSS login context is used instead. Table 8.3. Properties Value Description jaasName This defines the JAAS configuration name in the client.conf file that is referenced by the java.security.auth.login.config property. This property is optional. If it is omitted, "Teiid" is used by default. kerberosServicePrincipleName This defines the service principal that is requested on behalf of the service to which you are connecting. If this property is omitted, the default principal used is "TEIID/hostname", where hostname is derived from the JDBC connection URL. 8.1.1.5. ODBC Client Configuration Create a DSN for the virtual database on the client machine using the PostgreSQL ODBC driver. In order to participate in Kerberos-based authentication, you need to configure the "user" property as required by "gss-pattern" or define the "authentication-type" property on the VDB or transport. 
No additional configuration is needed as part of this, except that your workstation where the ODBC DSN exists must have been authenticated using GSS API against Active Directory or another enterprise directory server. For more details on this, see http://spnego.sourceforge.net 8.1.1.6. OData Client By default, the OData client is configured to use HTTP Basic authentication. To convert this authentication method to Kerberos, clone or copy the Maven project from https://github.com/teiid/teiid-web-security. You must update this to the version used in the product (featuring the redhat-x extension) before running Maven. Edit the web.xml and jboss-web.xml files and then replace the MY_REALM property with that of your security domain. Once the properties are updated, create a WAR file by running this command: Copy the WAR file from the odata-kerberos/target directory to replace the original OData WAR file with the same name.
[ "<security-domain name=\"host\"> <authentication> <login-module code=\"Kerberos\" flag=\"required\"> <module-option name=\"storeKey\" value=\"true\"/> <module-option name=\"useKeyTab\" value=\"true\"/> <module-option name=\"principal\" value=\"host/testserver@MY_REALM\"/> <!-- service principal --> <module-option name=\"keyTab\" value=\"/path/to/service.keytab\"/> <module-option name=\"doNotPrompt\" value=\"true\"/> <module-option name=\"debug\" value=\"false\"/> </login-module> </authentication> </security-domain>", "<security-domain name=\"MY_REALM\"> <authentication> <!-- Check the username and password --> <login-module code=\"SPNEGO\" flag=\"requisite\"> <module-option name=\"password-stacking\" value=\"useFirstPass\"/> <module-option name=\"serverSecurityDomain\" value=\"host\"/> </login-module> <!-- Search for roles --> <login-module code=\"UserRoles\" flag=\"requisite\"> <module-option name=\"password-stacking\" value=\"useFirstPass\" /> <module-option name=\"usersProperties\" value=\"spnego-users.properties\" /> <module-option name=\"rolesProperties\" value=\"spnego-roles.properties\" /> </login-module> </authentication> </security-domain>", "<module-option name=\"usernamePasswordDomain\" value=\"{user-name-based-auth}\"/>", "<transport name=\"jdbc\" protocol=\"teiid\" socket-binding=\"teiid-jdbc\"/> <authentication security-domain=\"MY_REALM\" type=\"GSS\"/> </transport>", "<transport name=\"odbc\" protocol=\"pg\" socket-binding=\"teiid-odbc\"/> <authentication security-domain=\"MY_REALM\" type=\"GSS\"/> </transport>", "<property name=\"security-domain\" value=\"MY_REALM\" /> <property name=\"gss-pattern\" value=\"{regex}\" /> <property name=\"password-pattern\" value=\"{regex}\" /> <property name=\"authentication-type\" value=\"GSS or USERPASSWORD\" />", "<property name=\"security-domain\" value=\"MY_REALM\" /> <property name=\"gss-pattern\" value=\"logasgss\" />", "<property name=\"security-domain\" value=\"MY_REALM\" /> <property name=\"authentication-type\" value=\"GSS\" />", "JAVA_OPTS = \"USDJAVA_OPTS -Djava.security.krb5.realm=EXAMPLE.COM -Djava.security.krb5.kdc=kerberos.example.com -Djavax.security.auth.useSubjectCredsOnly=false\"", "JAVA_OPTS = \"USDJAVA_OPTS -Djava.security.krb5.conf=/path/to/krb5.conf -Djava.security.krb5.debug=false -Djavax.security.auth.useSubjectCredsOnly=false\"", "<system-properties> <property name=\"java.security.krb5.conf\" value=\"/pth/to/krb5.conf\"/> <property name=\"java.security.krb5.debug\" value=\"false\"/> <property name=\"javax.security.auth.useSubjectCredsOnly\" value=\"false\"/> </system-properties>", "Teiid { com.sun.security.auth.module.Krb5LoginModule required useTicketCache=true storeKey=true useKeyTab=true keyTab=\"/path/to/krb5.keytab\" doNotPrompt=true debug=false principal=\"[email protected]\"; };", "-Djava.security.krb5.conf=/path/to/krb5.conf -Djava.security.auth.login.config=/path/to/client.conf -Djavax.security.auth.useSubjectCredsOnly=false -Dsun.security.krb5.debug=false", "-Djava.security.krb5.realm=EXAMPLE.COM -Djava.security.krb5.kdc=kerberos.example.com -Djavax.security.auth.useSubjectCredsOnly=false -Dsun.security.krb5.debug=false -Djava.security.auth.login.config=/path/to/client.conf", "jaasName=Teiid;user={pattern};kerberosServicePrincipleName=host/testserver@MY_REALM", "mvn clean install" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/security_guide/chap-kerberos1
Chapter 8. Migrating your applications
Chapter 8. Migrating your applications You can migrate your applications by using the Migration Toolkit for Containers (MTC) web console or the command line . Most cluster-scoped resources are not yet handled by MTC. If your applications require cluster-scoped resources, you might have to create them manually on the target cluster. You can use stage migration and cutover migration to migrate an application between clusters: Stage migration copies data from the source cluster to the target cluster without stopping the application. You can run a stage migration multiple times to reduce the duration of the cutover migration. Cutover migration stops the transactions on the source cluster and moves the resources to the target cluster. You can use state migration to migrate an application's state: State migration copies selected persistent volume claims (PVCs). You can use state migration to migrate a namespace within the same cluster. During migration, the Migration Toolkit for Containers (MTC) preserves the following namespace annotations: openshift.io/sa.scc.mcs openshift.io/sa.scc.supplemental-groups openshift.io/sa.scc.uid-range These annotations preserve the UID range, ensuring that the containers retain their file system permissions on the target cluster. There is a risk that the migrated UIDs could duplicate UIDs within an existing or future namespace on the target cluster. 8.1. Migration prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Direct image migration You must ensure that the secure OpenShift image registry of the source cluster is exposed. You must create a route to the exposed registry. Direct volume migration If your clusters use proxies, you must configure an Stunnel TCP proxy. Clusters The source cluster must be upgraded to the latest MTC z-stream release. The MTC version must be the same on all clusters. Network The clusters have unrestricted network access to each other and to the replication repository. If you copy the persistent volumes with move , the clusters must have unrestricted network access to the remote volumes. You must enable the following ports on an OpenShift Container Platform 4 cluster: 6443 (API server) 443 (routes) 53 (DNS) You must enable port 443 on the replication repository if you are using TLS. Persistent volumes (PVs) The PVs must be valid. The PVs must be bound to persistent volume claims. If you use snapshots to copy the PVs, the following additional prerequisites apply: The cloud provider must support snapshots. The PVs must have the same cloud provider. The PVs must be located in the same geographic region. The PVs must have the same storage class. 8.2. Migrating your applications by using the MTC web console You can configure clusters and a replication repository by using the MTC web console. Then, you can create and run a migration plan. 8.2.1. Launching the MTC web console You can launch the Migration Toolkit for Containers (MTC) web console in a browser. Prerequisites The MTC web console must have network access to the OpenShift Container Platform web console. The MTC web console must have network access to the OAuth authorization server. Procedure Log in to the OpenShift Container Platform cluster on which you have installed MTC. Obtain the MTC web console URL by entering the following command: USD oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}' The output resembles the following: https://migration-openshift-migration.apps.cluster.openshift.com . 
Launch a browser and navigate to the MTC web console. Note If you try to access the MTC web console immediately after installing the Migration Toolkit for Containers Operator, the console might not load because the Operator is still configuring the cluster. Wait a few minutes and retry. If you are using self-signed CA certificates, you will be prompted to accept the CA certificate of the source cluster API server. The web page guides you through the process of accepting the remaining certificates. Log in with your OpenShift Container Platform username and password . 8.2.2. Adding a cluster to the MTC web console You can add a cluster to the Migration Toolkit for Containers (MTC) web console. Prerequisites If you are using Azure snapshots to copy data: You must specify the Azure resource group name for the cluster. The clusters must be in the same Azure resource group. The clusters must be in the same geographic location. If you are using direct image migration, you must expose a route to the image registry of the source cluster. Procedure Log in to the cluster. Obtain the migration-controller service account token: USD oc sa get-token migration-controller -n openshift-migration Example output eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ In the MTC web console, click Clusters . Click Add cluster . Fill in the following fields: Cluster name : The cluster name can contain lower-case letters ( a-z ) and numbers ( 0-9 ). It must not contain spaces or international characters. URL : Specify the API server URL, for example, https://<www.example.com>:8443 . Service account token : Paste the migration-controller service account token. Exposed route host to image registry : If you are using direct image migration, specify the exposed route to the image registry of the source cluster. To create the route, run the following command: For OpenShift Container Platform 3: USD oc create route passthrough --service=docker-registry --port=5000 -n default For OpenShift Container Platform 4: USD oc create route passthrough --service=image-registry --port=5000 -n openshift-image-registry Azure cluster : You must select this option if you use Azure snapshots to copy your data. Azure resource group : This field is displayed if Azure cluster is selected. Specify the Azure resource group. Require SSL verification : Optional: Select this option to verify SSL connections to the cluster. CA bundle file : This field is displayed if Require SSL verification is selected. If you created a custom CA certificate bundle file for self-signed certificates, click Browse , select the CA bundle file, and upload it. Click Add cluster . The cluster appears in the Clusters list. 8.2.3. 
Adding a replication repository to the MTC web console You can add an object storage as a replication repository to the Migration Toolkit for Containers (MTC) web console. MTC supports the following storage providers: Amazon Web Services (AWS) S3 Multi-Cloud Object Gateway (MCG) Generic S3 object storage, for example, Minio or Ceph S3 Google Cloud Provider (GCP) Microsoft Azure Blob Prerequisites You must configure the object storage as a replication repository. Procedure In the MTC web console, click Replication repositories . Click Add repository . Select a Storage provider type and fill in the following fields: AWS for S3 providers, including AWS and MCG: Replication repository name : Specify the replication repository name in the MTC web console. S3 bucket name : Specify the name of the S3 bucket. S3 bucket region : Specify the S3 bucket region. Required for AWS S3. Optional for some S3 providers. Check the product documentation of your S3 provider for expected values. S3 endpoint : Specify the URL of the S3 service, not the bucket, for example, https://<s3-storage.apps.cluster.com> . Required for a generic S3 provider. You must use the https:// prefix. S3 provider access key : Specify the <AWS_SECRET_ACCESS_KEY> for AWS or the S3 provider access key for MCG and other S3 providers. S3 provider secret access key : Specify the <AWS_ACCESS_KEY_ID> for AWS or the S3 provider secret access key for MCG and other S3 providers. Require SSL verification : Clear this checkbox if you are using a generic S3 provider. If you created a custom CA certificate bundle for self-signed certificates, click Browse and browse to the Base64-encoded file. GCP : Replication repository name : Specify the replication repository name in the MTC web console. GCP bucket name : Specify the name of the GCP bucket. GCP credential JSON blob : Specify the string in the credentials-velero file. Azure : Replication repository name : Specify the replication repository name in the MTC web console. Azure resource group : Specify the resource group of the Azure Blob storage. Azure storage account name : Specify the Azure Blob storage account name. Azure credentials - INI file contents : Specify the string in the credentials-velero file. Click Add repository and wait for connection validation. Click Close . The new repository appears in the Replication repositories list. 8.2.4. Creating a migration plan in the MTC web console You can create a migration plan in the Migration Toolkit for Containers (MTC) web console. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must ensure that the same MTC version is installed on all clusters. You must add the clusters and the replication repository to the MTC web console. If you want to use the move data copy method to migrate a persistent volume (PV), the source and target clusters must have uninterrupted network access to the remote volume. If you want to use direct image migration, you must specify the exposed route to the image registry of the source cluster. This can be done by using the MTC web console or by updating the MigCluster custom resource manifest. Procedure In the MTC web console, click Migration plans . Click Add migration plan . Enter the Plan name . The migration plan name must not exceed 253 lower-case alphanumeric characters ( a-z, 0-9 ) and must not contain spaces or underscores ( _ ). Select a Source cluster , a Target cluster , and a Repository . Click . Select the projects for migration. 
Optional: Click the edit icon beside a project to change the target namespace. Click Next . Select a Migration type for each PV: The Copy option copies the data from the PV of a source cluster to the replication repository and then restores the data on a newly created PV, with similar characteristics, in the target cluster. The Move option unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. Click Next . Select a Copy method for each PV: Snapshot copy backs up and restores data using the cloud provider's snapshot functionality. It is significantly faster than Filesystem copy . Filesystem copy backs up the files on the source cluster and restores them on the target cluster. The file system copy method is required for direct volume migration. You can select Verify copy to verify data migrated with Filesystem copy . Data is verified by generating a checksum for each source file and checking the checksum after restoration. Data verification significantly reduces performance. Select a Target storage class . If you selected Filesystem copy , you can change the target storage class. Click Next . On the Migration options page, the Direct image migration option is selected if you specified an exposed image registry route for the source cluster. The Direct PV migration option is selected if you are migrating data with Filesystem copy . The direct migration options copy images and files directly from the source cluster to the target cluster. This option is much faster than copying images and files from the source cluster to the replication repository and then from the replication repository to the target cluster. Click Next . Optional: Click Add Hook to add a hook to the migration plan. A hook runs custom code. You can add up to four hooks to a single migration plan. Each hook runs during a different migration step. Enter the name of the hook to display in the web console. If the hook is an Ansible playbook, select Ansible playbook and click Browse to upload the playbook or paste the contents of the playbook in the field. Optional: Specify an Ansible runtime image if you are not using the default hook image. If the hook is not an Ansible playbook, select Custom container image and specify the image name and path. A custom container image can include Ansible playbooks. Select Source cluster or Target cluster . Enter the Service account name and the Service account namespace . Select the migration step for the hook: preBackup : Before the application workload is backed up on the source cluster postBackup : After the application workload is backed up on the source cluster preRestore : Before the application workload is restored on the target cluster postRestore : After the application workload is restored on the target cluster Click Add . Click Finish . The migration plan is displayed in the Migration plans list. Additional resources for persistent volume copy methods MTC file system copy method MTC snapshot copy method 8.2.5. Running a migration plan in the MTC web console You can migrate applications and data with the migration plan you created in the Migration Toolkit for Containers (MTC) web console. Note During migration, MTC sets the reclaim policy of migrated persistent volumes (PVs) to Retain on the target cluster. 
The Backup custom resource contains a PVOriginalReclaimPolicy annotation that indicates the original reclaim policy. You can manually restore the reclaim policy of the migrated PVs. Prerequisites The MTC web console must contain the following: Source cluster in a Ready state Target cluster in a Ready state Replication repository Valid migration plan Procedure Log in to the MTC web console and click Migration plans . Click the Options menu next to a migration plan and select one of the following options under Migration : Stage copies data from the source cluster to the target cluster without stopping the application. Cutover stops the transactions on the source cluster and moves the resources to the target cluster. Optional: In the Cutover migration dialog, you can clear the Halt transactions on the source cluster during migration checkbox. State copies selected persistent volume claims (PVCs). Important Do not use state migration to migrate a namespace between clusters. Use stage or cutover migration instead. Select one or more PVCs in the State migration dialog and click Migrate . When the migration is complete, verify that the application migrated successfully in the OpenShift Container Platform web console: Click Home Projects . Click the migrated project to view its status. In the Routes section, click Location to verify that the application is functioning, if applicable. Click Workloads Pods to verify that the pods are running in the migrated namespace. Click Storage Persistent volumes to verify that the migrated persistent volumes are correctly provisioned.
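If you prefer to confirm the migration from the command line rather than the web console, the commands below are a minimal sketch of the same verification steps. The openshift-migration namespace is the MTC default, and <migrated-namespace> is a placeholder for your application's namespace; adjust both to your environment.

$ oc get migplan -n openshift-migration          # migration plans known to the MTC controller
$ oc get migmigration -n openshift-migration     # stage, cutover, and state migrations and their current phases
$ oc get pods -n <migrated-namespace>            # pods running in the migrated namespace on the target cluster
$ oc get pvc -n <migrated-namespace>             # persistent volume claims bound after the migration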
[ "oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}'", "oc sa get-token migration-controller -n openshift-migration", "eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ", "oc create route passthrough --service=docker-registry --port=5000 -n default", "oc create route passthrough --service=image-registry --port=5000 -n openshift-image-registry" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/migration_toolkit_for_containers/migrating-applications-with-mtc
Chapter 4. Managing build output
Chapter 4. Managing build output Use the following sections for an overview of and instructions for managing build output. 4.1. Build output Builds that use the source-to-image (S2I) strategy result in the creation of a new container image. The image is then pushed to the container image registry specified in the output section of the Build specification. If the output kind is ImageStreamTag , then the image will be pushed to the integrated OpenShift image registry and tagged in the specified imagestream. If the output is of type DockerImage , then the name of the output reference will be used as a docker push specification. The specification may contain a registry or will default to DockerHub if no registry is specified. If the output section of the build specification is empty, then the image will not be pushed at the end of the build. Output to an ImageStreamTag spec: output: to: kind: "ImageStreamTag" name: "sample-image:latest" Output to a docker Push Specification spec: output: to: kind: "DockerImage" name: "my-registry.mycompany.com:5000/myimages/myimage:tag" 4.2. Output image environment variables source-to-image (S2I) strategy builds set the following environment variables on output images: Variable Description OPENSHIFT_BUILD_NAME Name of the build OPENSHIFT_BUILD_NAMESPACE Namespace of the build OPENSHIFT_BUILD_SOURCE The source URL of the build OPENSHIFT_BUILD_REFERENCE The Git reference used in the build OPENSHIFT_BUILD_COMMIT Source commit used in the build Additionally, any user-defined environment variable, for example those configured with S2I strategy options, will also be part of the output image environment variable list. 4.3. Output image labels source-to-image (S2I) builds set the following labels on output images: Label Description io.openshift.build.commit.author Author of the source commit used in the build io.openshift.build.commit.date Date of the source commit used in the build io.openshift.build.commit.id Hash of the source commit used in the build io.openshift.build.commit.message Message of the source commit used in the build io.openshift.build.commit.ref Branch or reference specified in the source io.openshift.build.source-location Source URL for the build You can also use the BuildConfig.spec.output.imageLabels field to specify a list of custom labels that will be applied to each image built from the build configuration. Custom labels for built images spec: output: to: kind: "ImageStreamTag" name: "my-image:latest" imageLabels: - name: "vendor" value: "MyCompany" - name: "authoritative-source-url" value: "registry.mycompany.com"
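As a rough illustration of how to inspect the output image from the command line, the commands below assume an image stream tag named sample-image:latest in the current project and a pod created from that image; the resource names and the JSONPath expression are illustrative, so verify them against your cluster.

$ oc describe istag/sample-image:latest                                                        # shows the image metadata, including its labels
$ oc get istag/sample-image:latest -o jsonpath='{.image.dockerImageMetadata.Config.Labels}'    # prints only the image labels
$ oc exec <pod-from-image> -- env | grep OPENSHIFT_BUILD                                        # lists the build environment variables baked into the image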
[ "spec: output: to: kind: \"ImageStreamTag\" name: \"sample-image:latest\"", "spec: output: to: kind: \"DockerImage\" name: \"my-registry.mycompany.com:5000/myimages/myimage:tag\"", "spec: output: to: kind: \"ImageStreamTag\" name: \"my-image:latest\" imageLabels: - name: \"vendor\" value: \"MyCompany\" - name: \"authoritative-source-url\" value: \"registry.mycompany.com\"" ]
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/builds_using_buildconfig/managing-build-output
Chapter 52. EntityOperatorTemplate schema reference
Chapter 52. EntityOperatorTemplate schema reference Used in: EntityOperatorSpec Property Property type Description deployment DeploymentTemplate Template for Entity Operator Deployment . pod PodTemplate Template for Entity Operator Pods . topicOperatorContainer ContainerTemplate Template for the Entity Topic Operator container. userOperatorContainer ContainerTemplate Template for the Entity User Operator container. tlsSidecarContainer ContainerTemplate Template for the Entity Operator TLS sidecar container. serviceAccount ResourceTemplate Template for the Entity Operator service account. entityOperatorRole ResourceTemplate Template for the Entity Operator Role. topicOperatorRoleBinding ResourceTemplate Template for the Entity Topic Operator RoleBinding. userOperatorRoleBinding ResourceTemplate Template for the Entity User Operator RoleBinding.
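For orientation, the sketch below shows roughly where these template properties sit inside a Kafka custom resource. The cluster name, label, and environment variable are made-up examples, and the exact fields available under each template type are described in the DeploymentTemplate, PodTemplate, ContainerTemplate, and ResourceTemplate references.

$ oc edit kafka my-cluster
# then add a template section under spec.entityOperator, for example:
#   template:
#     pod:
#       metadata:
#         labels:
#           example.com/team: platform     # hypothetical label added to the Entity Operator pod
#     topicOperatorContainer:
#       env:
#         - name: EXAMPLE_DEBUG            # hypothetical environment variable for the Topic Operator container
#           value: "false"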
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-EntityOperatorTemplate-reference
Chapter 1. Overview
Chapter 1. Overview This guide provides recommendations for container development that are supported by Red Hat. Though containers, in their current Docker-driven implementation, are a new and rapidly-developing technology, this guide captures the state of container support within Red Hat. Because there are many use cases for containers, this guide provides general recommendations about fundamental container-related practices that are useful and supported by Red Hat. 1.1. Container Provenance Where do containers come from? How do we get them? How do we make sure that we get them in a secure manner? Containers are built from images, and images are stored in repositories on registries. This topic discusses two registries that the command docker pull can access: The docker.io registry The Red Hat Registry ( http://registry.access.redhat.com/ ) 1.1.1. The Risks of "docker pull" The docker pull command, used without specifying the registry from which you are pulling, is a potentially dangerous command. Docker makes no distinction between retrieval of software and installation of software. This behavior is different from the case of an RPM: it is safe to use wget to retrieve an RPM that contains malware as long as you do not install the RPM. It is not safe, however, to pull malware using docker pull because retrieving an image is functionally equivalent to installing it. For example, assume that containers are software that, upon retrieval, run as privileged. Containers relinquish privilege only after having their settings manipulated. The isolation of containers is usually voluntary, and they are not isolated by default. Warning Exercise caution when using the docker pull command. If a container is not found in the Red Hat Registry, docker pull fails over to the docker.io registry. Red Hat does not verify the security or authenticity of containers from third-party sources, such as docker.io . See also the Image-Naming Conventions chapter. When retrieving images from places other than the Red Hat Registry, avoid docker pull . If possible, use docker load and docker save . You can use docker load and docker save with tarballs and then verify the images. Why would you want to use docker load and docker save instead of docker pull ? docker load and docker save provide a way to avoid the security vulnerability introduced by exposing your system to third-party registries, which could happen when you run docker pull . For more information on container provenance and exercising caution when using docker pull , see Red Hat's Security Blog post Before You Initiate a "docker pull" . docker pull The basic form of docker pull is: $ sudo docker pull repo/image:tag where repo and tag are optional. If repo and tag are not specified, docker will attempt to locate the image in the docker.io registry. For this reason, it is advisable always to explicitly name the registry from which you want to pull the image. If no registry is specified, docker attempts to find an image on the docker.io registry. If no tag is supplied, docker attempts by default to pull the latest image. docker load The basic form of docker load is: $ sudo docker load -i input.tar where input.tar is a tar archive to be loaded into your local image store. The -i and the input.tar file name are optional. If neither -i nor a file name is specified, docker load expects tar data on STDIN.
docker save The basic form of docker save is: $ sudo docker save -o output.tar image:tag where image:tag names the image to save and output.tar is the tar archive to which the image is written. The -o and the output.tar file name are optional. If neither -o nor a file name is specified, docker save writes the image data to STDOUT.
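The following is a minimal sketch of the save-and-verify workflow described above. The image name is only an example, and the checksum step is one simple way to confirm the tarball was not altered in transit, not a mandated procedure.

$ sudo docker pull registry.access.redhat.com/rhel7:latest                  # pull explicitly from the Red Hat Registry
$ sudo docker save -o rhel7.tar registry.access.redhat.com/rhel7:latest     # save the image to a tarball
$ sha256sum rhel7.tar > rhel7.tar.sha256                                    # record a checksum alongside the tarball
$ sha256sum -c rhel7.tar.sha256                                             # verify the tarball on the destination host
$ sudo docker load -i rhel7.tar                                             # load the verified image into the local image store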
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/recommended_practices_for_container_development/overview
Chapter 22. General Updates
Chapter 22. General Updates Addition of CtrlAltDelBurstAction for systemd The systemd response to multiple CTRL+ALT+DEL events is now configurable by setting the CtrlAltDelBurstAction option in /etc/systemd/system.conf . (BZ#1353028) cgred can now resolve rules concerning NSS users and groups Previously, the cgred service was not configured to start up after services providing Name Service Switch (NSS) users and groups. Also, information about skipping invalid rules was shown only in debug mode. Consequently, rules in the cgrules.conf file concerning NSS users and groups were sometimes ignored without any log message. With this update, cgred is configured to start after the nss-user-lookup target, and the level of log messages about skipped rules has been changed to warning, which is now also the default log level for the cgred daemon. As a result, NSS users and groups are now always resolved before cgred starts. In addition, a warning message is logged if any rules in cgrules.conf are invalid. (BZ#1406927)
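As a rough sketch of the new option, the commands below set CtrlAltDelBurstAction to none so that a burst of Ctrl+Alt+Del presses is ignored. They assume the commented default line is present in /etc/systemd/system.conf (otherwise append the setting), and the accepted values are listed in the systemd-system.conf(5) man page.

$ sudo sed -i 's/^#\?CtrlAltDelBurstAction=.*/CtrlAltDelBurstAction=none/' /etc/systemd/system.conf
$ grep CtrlAltDelBurstAction /etc/systemd/system.conf        # confirm the new value
CtrlAltDelBurstAction=none
$ sudo systemctl daemon-reexec                               # re-execute the manager so the setting takes effect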
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/bug_fixes_general_updates
13.7. Federated Optimizations
13.7. Federated Optimizations 13.7.1. Access Patterns Access patterns are used on both physical tables and views to specify the need for criteria against a set of columns. Failure to supply the criteria will result in a planning error, rather than a runaway source query. Access patterns can be applied in a set such that only one of the access patterns is required to be satisfied. Currently any form of criteria referencing an affected column may satisfy an access pattern. 13.7.2. Pushdown In federated database systems, pushdown refers to decomposing the user query into source queries that perform as much work as possible on their respective source system. Pushdown analysis requires knowledge of source system capabilities, which is provided to JBoss Data Virtualization through the Connector API. Any work not performed at the source is then processed in the federating system's relational engine (in JBoss Data Virtualization). Based upon capabilities, JBoss Data Virtualization will manipulate the query plan to ensure that each source performs as much joining, filtering, grouping, etc. as possible. In many cases, such as with join ordering, planning combines standard relational techniques (see Section 13.7.9, "Standard Relational Techniques" ) and heuristics based on cost effectiveness to optimize pushdowns. Criteria and join pushdown are typically the most important aspects of the query to push down when performance is a concern. See Section 13.8.1, "Query Plans" for information about how to read a plan to ensure that source queries are as efficient as possible. 13.7.3. Dependent Joins A special optimization called a dependent join is used to reduce the rows returned from one of the two relations involved in a multi-source join. In a dependent join, queries are issued to each source sequentially rather than in parallel, with the results obtained from the first source used to restrict the records returned from the second. Dependent joins can perform some joins much faster by reducing the amount of data retrieved from the second source and the number of join comparisons that must be performed. The conditions when a dependent join is used are determined by the query planner based on access patterns, hints, and costing information. There are three different kinds of dependent joins that Teiid supports: Join based on in/equality support: where the engine determines how to break up the queries Key Pushdown: where the translator has access to the full set of key values and determines what queries to send Full Pushdown: where the translator ships all the data from the independent side to the dependent side. This can be chosen automatically based on costing or specified as an option in the hint. JBoss Data Virtualization supports hints to control dependent join behavior: MAKEIND - indicates that the clause should be the independent side of a dependent join. MAKEDEP - indicates that the clause should be the dependent side of a join. MAKEDEP as a non-comment hint supports optional max and join arguments - MAKEDEP(JOIN) meaning that the entire join should be pushed, and MAKEDEP(MAX:5000) meaning that the dependent join should only be performed if there are fewer than the maximum number of values from the independent side. MAKENOTDEP - prevents the clause from being the dependent side of a join. These can be placed in either the OPTION clause or directly in the FROM clause. As long as all access patterns can be met, the MAKEIND, MAKEDEP, and MAKENOTDEP hints override any use of costing information.
MAKENOTDEP supersedes the other hints. Note The MAKEDEP/MAKEIND hint must only be used if the proper query plan is not chosen by default. Ensure that your costing information is representative of the actual source cardinality. An inappropriate MAKEDEP/MAKEIND hint can force an inefficient join structure and may result in many source queries. For IN clauses, the engine will filter the values coming from the dependent side. If the number of values from the independent side exceeds the translators MaxInCriteriaSize, the values will be split into multiple IN predicates up to MaxDependentPredicates. When the number of independent values exceeds MaxInCriteriaSize*MaxDependentPredicates, then multiple dependent queries will be issued in parallel. Note While these hints can be applied to views, the optimizer will by default remove views when possible. This can result in the hint placement being significantly different than that which was originally intended. Consider using the NO_UNNEST hint to prevent the optimizer from removing the view in these cases. A "full pushdown", sometimes also called a "data-ship pushdown", is where all the data from independent side of the join is sent to dependent side. Currently this is only supported in the JDBC translators. To enable it, provide translator override property "enableDependentJoins" to "true". The JDBC source must support creation temp tables (this is determined by using Hibernate dialect capabilities for the source). Once these properties are enabled and MAKEDEP hint is used, the translator will ship the data as temp table contents and push the dependent join to the source for full processing. 13.7.4. Copy Criteria Copy criteria is an optimization that creates additional predicates based upon combining join and where clause criteria. For example, equi-join predicates (source1.table.column = source2.table.column) are used to create new predicates by substituting source1.table.column for source2.table.column and vice versa. In a cross source scenario, this allows for WHERE criteria applied to a single side of the join to be applied to both source queries. 13.7.5. Projection Minimization JBoss Data Virtualization ensures that each pushdown query only projects the symbols required for processing the user query. This is especially helpful when querying through large intermediate view layers. 13.7.6. Partial Aggregate Pushdown Partial aggregate pushdown allows for grouping operations above multi-source joins and unions to be decomposed so that some of the grouping and aggregate functions may be pushed down to the sources. 13.7.7. Optional Join The optional join hint indicates to omit a joined table if none of its columns are used by the output of the user query or in a meaningful way to construct the results of the user query. This hint is typically only used in view layers containing multi-source joins. The optional join hint is applied as a comment on a join clause. It can be applied in both ANSI and non-ANSI joins. With non-ANSI joins an entire joined table may be marked as optional. Example 13.9. Example Optional Join Hint select a.column1, b.column2 from a, /*+ optional */ b WHERE a.key = b.key Suppose this example defines a view layer X. If X is queried in such a way as to not need b.column2, then the optional join hint will cause b to be omitted from the query plan. The result would be the same as if X were defined as: select a.column1 from a Example 13.10. 
Example ANSI Optional Join Hint In this example the ANSI join syntax allows for the join of a and b to be marked as optional. Suppose this example defines a view layer X. Only if both column a.column1 and b.column2 are not needed, e.g. "SELECT column3 FROM X" will the join be removed. The optional join hint will not remove a bridging table that is still required. Example 13.11. Example Bridging Table select a.column1, b.column2, c.column3 from /*+ optional */ a, b, c WHERE ON a.key = b.key AND a.key = c.key Suppose this example defines a view layer X. If b.column2 or c.column3 are solely required by a query to X, then the join on a can be removed. However if a.column1 or both b.column2 and c.column3 are needed, then the optional join hint will not take effect. Note When a join clause is omitted via the optional join hint, the relevant criteria is not applied. Thus it is possible that the query results may not have the same cardinality or even the same row values as when the join is fully applied. Left/right outer joins where the inner side values are not used and whose rows under go a distinct operation will automatically be treated as an optional join and do not require a hint. Example 13.12. Example Unnecessary Optional Join Hint select a.column1, b.column2 from a LEFT OUTER JOIN /*+optional*/ b ON a.key = b.key Warning A simple "SELECT COUNT(*) FROM VIEW" against a view where all join tables are marked as optional will not return a meaningful result. Source Hints Teiid user and transformation queries can contain a meta source hint that can provide additional information to source queries. The source hint has the form: The source hint is expected to appear after the query (SELECT, INSERT, UPDATE, DELETE) keyword. Source hints may appear in any subquery or in views. All hints applicable to a given source query will be collected and pushed down together as a list. The order of the hints is not guaranteed. The sh arg is optional and is passed to all source queries via the ExecutionContext.getGeneralHints method. The additional args should have a source-name that matches the source name assigned to the translator in the VDB. If the source-name matches, the hint values will be supplied via the ExecutionContext.getSourceHints method. Each of the arg values has the form of a string literal - it must be surrounded in single quotes and a single quote can be escaped with another single quote. Only the Oracle translator does anything with source hints by default. The Oracle translator will use both the source hint and the general hint (in that order) if available to form an Oracle hint enclosed in /*+ ... */. If the KEEP ALIASES option is used either for the general hint or on the applicable source specific hint, then the table/view aliases from the user query and any nested views will be preserved in the push-down query. This is useful in situations where the source hint may need to reference aliases and the user does not wish to rely on the generated aliases (which can be seen in the query plan in the relevant source queries - see above). However in some situations this may result in an invalid source query if the preserved alias names are not valid for the source or result in a name collision. If the usage of KEEP ALIASES results in an error, the query could be modified by preventing view removal with the NO_UNNEST hint, the aliases modified, or the KEEP ALIASES option could be removed and the query plan used to determine the generated alias names. Here are some sample source hints: 13.7.8. 
Partitioned Union Union partitioning is inferred from the transformation/inline view. If one (or more) of the UNION columns is defined by constants and/or has WHERE clause IN predicates containing only constants that make each branch mutually exclusive, then the UNION is considered partitioned. UNION ALL must be used and the UNION cannot have a LIMIT, WITH, or ORDER BY clause (although individual branches may use LIMIT, WITH, or ORDER BY). Partitioning values should not be null. For example the view definition "select 1 as x, y from foo union all select z, a from foo1 where z in (2, 3)" would be considered partitioned on column x, since the first branch can only be the value 1 and the second branch can only be the values 2 or 3. Note More advanced or explicit partitioning could be considered in the future. The concept of a partitioned union is used for performing partition-wise joins (see Section 4.1, "Updatable Views" and Section 13.7.6, "Partial Aggregate Pushdown" ). 13.7.9. Standard Relational Techniques JBoss Data Virtualization also incorporates many standard relational techniques to ensure efficient query plans. Rewrite analysis for function simplification and evaluation. Boolean optimizations for basic criteria simplification. Removal of unnecessary view layers. Removal of unnecessary sort operations. Advanced search techniques through the left-linear space of join trees. Parallelizing of source access during execution. Subquery optimization ( Section 13.4, "Subquery Optimization" )
[ "select a.column1, b.column2 from a, /*+ optional */ b WHERE a.key = b.key", "select a.column1 from a", "select a.column1, b.column2, c.column3 from /*+ optional */ (a inner join b ON a.key = b.key) INNER JOIN c ON a.key = c.key", "select a.column1, b.column2, c.column3 from /*+ optional */ a, b, c WHERE ON a.key = b.key AND a.key = c.key", "select a.column1, b.column2 from a LEFT OUTER JOIN /*+optional*/ b ON a.key = b.key", "/*+ sh[[ KEEP ALIASES]:'arg'] source-name[ KEEP ALIASES]:'arg1' ... */", "SELECT /*+ sh:'general hint' */", "SELECT /*+ sh KEEP ALIASES:'general hint' my-oracle:'oracle hint' */" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/sect-federated_optimizations
4.4. GLOBAL SETTINGS
4.4. GLOBAL SETTINGS The GLOBAL SETTINGS panel is where you define the networking details for the primary LVS router's public and private network interfaces. Figure 4.3. The GLOBAL SETTINGS Panel The top half of this panel sets up the primary LVS router's public and private network interfaces. These are the interfaces already configured in Section 3.1.1, "Configuring Network Interfaces for LVS with NAT" . Primary server public IP In this field, enter the publicly routable real IP address for the primary LVS node. Primary server private IP Enter the real IP address for an alternative network interface on the primary LVS node. This address is used solely as an alternative heartbeat channel for the backup router and does not have to correlate to the real private IP address assigned in Section 3.1.1, "Configuring Network Interfaces for LVS with NAT" . You may leave this field blank, but doing so will mean there is no alternate heartbeat channel for the backup LVS router to use and therefore will create a single point of failure. Note The private IP address is not needed for Direct Routing configurations, as all real servers as well as the LVS directors share the same virtual IP addresses and should have the same IP route configuration. Note The primary LVS router's private IP can be configured on any interface that accepts TCP/IP, whether it be an Ethernet adapter or a serial port. Use network type Click the NAT button to select NAT routing. Click the Direct Routing button to select direct routing. The next three fields deal specifically with the NAT router's virtual network interface connecting the private network with the real servers. These fields do not apply to the direct routing network type. NAT Router IP Enter the private floating IP in this text field. This floating IP should be used as the gateway for the real servers. NAT Router netmask If the NAT router's floating IP needs a particular netmask, select it from the drop-down list. NAT Router device Use this text field to define the device name of the network interface for the floating IP address, such as eth1:1 . Note You should alias the NAT floating IP address to the Ethernet interface connected to the private network. In this example, the private network is on the eth1 interface, so eth1:1 is the device that carries the floating IP address. Warning After completing this page, click the ACCEPT button to make sure you do not lose any changes when selecting a new panel.
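For reference, a quick way to check or reproduce the eth1:1 alias outside the Piranha GUI is shown below. The IP address and netmask are examples only, and in normal operation the LVS daemons bring the floating IP up for you based on this configuration.

$ ifconfig eth1:1 10.11.12.10 netmask 255.255.255.0 up    # manually bring up the NAT floating IP alias for testing
$ ifconfig eth1:1                                          # confirm the alias carries the floating IP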
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/virtual_server_administration/s1-piranha-globalset-vsa
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/release_notes/making-open-source-more-inclusive
Chapter 29. Managing replication topology
Chapter 29. Managing replication topology You can manage replication between servers in an Identity Management (IdM) domain. When you create a replica, Identity Management (IdM) creates a replication agreement between the initial server and the replica. The data that is replicated is then stored in topology suffixes and when two replicas have a replication agreement between their suffixes, the suffixes form a topology segment. 29.1. Replication agreements between IdM replicas When an administrator creates a replica based on an existing server, Identity Management (IdM) creates a replication agreement between the initial server and the replica. The replication agreement ensures that the data and configuration is continuously replicated between the two servers. IdM uses multiple read/write replica replication . In this configuration, all replicas joined in a replication agreement receive and provide updates, and are therefore considered suppliers and consumers. Replication agreements are always bilateral. Figure 29.1. Server and replica agreements IdM uses two types of replication agreements: Domain replication agreements replicate the identity information. Certificate replication agreements replicate the certificate information. Both replication channels are independent. Two servers can have one or both types of replication agreements configured between them. For example, when server A and server B have only domain replication agreement configured, only identity information is replicated between them, not the certificate information. 29.2. Topology suffixes Topology suffixes store the data that is replicated. IdM supports two types of topology suffixes: domain and ca . Each suffix represents a separate server, a separate replication topology. When a replication agreement is configured, it joins two topology suffixes of the same type on two different servers. The domain suffix: dc= example ,dc= com The domain suffix contains all domain-related data. When two replicas have a replication agreement between their domain suffixes, they share directory data, such as users, groups, and policies. The ca suffix: o=ipaca The ca suffix contains data for the Certificate System component. It is only present on servers with a certificate authority (CA) installed. When two replicas have a replication agreement between their ca suffixes, they share certificate data. Figure 29.2. Topology suffixes An initial topology replication agreement is set up between two servers by the ipa-replica-install script when installing a new replica. 29.3. Topology segments When two replicas have a replication agreement between their suffixes, the suffixes form a topology segment . Each topology segment consists of a left node and a right node . The nodes represent the servers joined in the replication agreement. Topology segments in IdM are always bidirectional. Each segment represents two replication agreements: from server A to server B, and from server B to server A. The data is therefore replicated in both directions. Figure 29.3. Topology segments 29.4. Viewing and modifying the visual representation of the replication topology using the WebUI Using the Web UI, you can view, manipulate, and transform the representation of the replication topology. The topology graph in the web UI shows the relationships between the servers in the domain. You can move individual topology nodes by holding and dragging the mouse. 
Interpreting the topology graph Servers joined in a domain replication agreement are connected by an orange arrow. Servers joined in a CA replication agreement are connected by a blue arrow. Topology graph example: recommended topology The recommended topology example below shows one of the possible recommended topologies for four servers: each server is connected to at least two other servers, and more than one server is a CA server. Figure 29.4. Recommended topology example Topology graph example: discouraged topology In the discouraged topology example below, server1 is a single point of failure. All the other servers have replication agreements with this server, but not with any of the other servers. Therefore, if server1 fails, all the other servers will become isolated. Avoid creating topologies like this. Figure 29.5. Discouraged topology example: Single Point of Failure Prerequisites You are logged in as an IdM administrator. Procedure Select IPA Server Topology Topology Graph . Make changes to the topology: You can move the topology graph nodes using the left mouse button: You can zoom in and zoom out the topology graph using the mouse wheel: You can move the canvas of the topology graph by holding the left mouse button: If you make any changes to the topology that are not immediately reflected in the graph, click Refresh . 29.5. Viewing topology suffixes using the CLI In a replication agreement, topology suffixes store the data that is replicated. You can view topology suffixes using the CLI. Procedure Enter the ipa topologysuffix-find command to display a list of topology suffixes: Additional resources Topology suffixes 29.6. Viewing topology segments using the CLI In a replication agreement, when two replicas have a replication agreement between their suffixes, the suffixes form a topology segments. You can view topology segments using the CLI. Procedure Enter the ipa topologysegment-find command to show the current topology segments configured for the domain or CA suffixes. For example, for the domain suffix: In this example, domain-related data is only replicated between two servers: server1.example.com and server2.example.com . (Optional) To display details for a particular segment only, enter the ipa topologysegment-show command: Additional resources Topology segments 29.7. Setting up replication between two servers using the Web UI Using the Identity Management (IdM) Web UI, you can choose two servers and create a new replication agreement between them. Prerequisites You are logged in as an IdM administrator. Procedure In the topology graph, hover your mouse over one of the server nodes. Figure 29.6. Domain or CA options Click on the domain or the ca part of the circle depending on what type of topology segment you want to create. A new arrow representing the new replication agreement appears under your mouse pointer. Move your mouse to the other server node, and click on it. Figure 29.7. Creating a new segment In the Add topology segment window, click Add to confirm the properties of the new segment. The new topology segment between the two servers joins them in a replication agreement. The topology graph now shows the updated replication topology: Figure 29.8. New segment created 29.8. Stopping replication between two servers using the Web UI Using the Identity Management (IdM) Web UI, you can remove a replication agreement from servers. Prerequisites You are logged in as an IdM administrator. 
Procedure Click on an arrow representing the replication agreement you want to remove. This highlights the arrow. Figure 29.9. Topology segment highlighted Click Delete . In the Confirmation window, click OK . IdM removes the topology segment between the two servers, which deletes their replication agreement. The topology graph now shows the updated replication topology: Figure 29.10. Topology segment deleted 29.9. Setting up replication between two servers using the CLI You can configure replication agreements between two servers using the ipa topologysegment-add command. Prerequisites You have the IdM administrator credentials. Procedure Create a topology segment for the two servers. When prompted, provide: The required topology suffix: domain or ca The left node and the right node, representing the two servers [Optional] A custom name for the segment For example: Adding the new segment joins the servers in a replication agreement. Verification Verify that the new segment is configured: 29.10. Stopping replication between two servers using the CLI You can terminate replication agreements from command line using the ipa topology segment-del command. Prerequisites You have the IdM administrator credentials. Procedure Optional. If you do not know the name of the specific replication segment that you want to remove, display all segments available. Use the ipa topologysegment-find command. When prompted, provide the required topology suffix: domain or ca . For example: Locate the required segment in the output. Remove the topology segment joining the two servers: Deleting the segment removes the replication agreement. Verification Verify that the segment is no longer listed: 29.11. Removing server from topology using the Web UI You can use Identity Management (IdM) web interface to remove a server from the topology. This action does not uninstall the server components from the host. Prerequisites You are logged in as an IdM administrator. The server you want to remove is not the only server connecting other servers with the rest of the topology; this would cause the other servers to become isolated, which is not allowed. The server you want to remove is not your last CA or DNS server. Warning Removing a server is an irreversible action. If you remove a server, the only way to introduce it back into the topology is to install a new replica on the machine. Procedure Select IPA Server Topology IPA Servers . Click on the name of the server you want to delete. Figure 29.11. Selecting a server Click Delete Server . Additional resources Uninstalling an IdM server 29.12. Removing server from topology using the CLI You can use the command line to remove an Identity Management (IdM) server from the topology. Prerequisites You have the IdM administrator credentials. The server you want to remove is not the only server connecting other servers with the rest of the topology; this would cause the other servers to become isolated, which is not allowed. The server you want to remove is not your last CA or DNS server. Important Removing a server is an irreversible action. If you remove a server, the only way to introduce it back into the topology is to install a new replica on the machine. Procedure To remove server1.example.com : On another server, run the ipa server-del command to remove server1.example.com . The command removes all topology segments pointing to the server: [Optional] On server1.example.com , run the ipa server-install --uninstall command to uninstall the server components from the machine. 
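A condensed, non-interactive sketch of the removal workflow follows. server1.example.com is the example server from this section, and you should confirm from the output of the first three commands that removing it will not isolate other servers or take away your last CA or DNS server.

$ ipa topologysegment-find domain            # segments in the domain suffix that involve the server
$ ipa topologysegment-find ca                # segments in the ca suffix that involve the server
$ ipa server-show server1.example.com        # roles enabled on the server you plan to remove
$ ipa server-del server1.example.com         # removes the server and all segments pointing to it
$ ipa server-install --uninstall             # optional, run on server1.example.com itself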
29.13. Removing obsolete RUV records If you remove a server from the IdM topology without properly removing its replication agreements, obsolete replica update vector (RUV) records will remain on one or more remaining servers in the topology. This can happen, for example, due to automation. These servers will then expect to receive updates from the now removed server. In this case, you need to clean the obsolete RUV records from the remaining servers. Prerequisites You have the IdM administrator credentials. You know which replicas are corrupted or have been improperly removed. Procedure List the details about RUVs using the ipa-replica-manage list-ruv command. The command displays the replica IDs: Important The ipa-replica-manage list-ruv command lists ALL replicas in the topology, not only the malfunctioning or improperly removed ones. Remove obsolete RUVs associated with a specified replica using the ipa-replica-manage clean-ruv command. Repeat the command for every replica ID with obsolete RUVs. For example, if you know server1.example.com and server2.example.com are the malfunctioning or improperly removed replicas: Warning Proceed with extreme caution when using ipa-replica-manage clean-ruv . Running the command against a valid replica ID will corrupt all the data associated with that replica in the replication database. If this happens, re-initialize the replica from another replica using USD ipa-replica-manage re-initialize --from server1.example.com . Verification Run ipa-replica-manage list-ruv again. If the command no longer displays any corrupt RUVs, the records have been successfully cleaned. If the command still displays corrupt RUVs, clear them manually using this task: 29.14. Viewing available server roles in the IdM topology using the IdM Web UI Based on the services installed on an IdM server, it can perform various server roles . For example: CA server DNS server Key recovery authority (KRA) server. Procedure For a complete list of the supported server roles, see IPA Server Topology Server Roles . Note Role status absent means that no server in the topology is performing the role. Role status enabled means that one or more servers in the topology are performing the role. Figure 29.12. Server roles in the web UI 29.15. Viewing available server roles in the IdM topology using the IdM CLI Based on the services installed on an IdM server, it can perform various server roles . For example: CA server DNS server Key recovery authority (KRA) server. Procedure To display all CA servers in the topology and the current CA renewal server: Alternatively, to display a list of roles enabled on a particular server, for example server.example.com : Alternatively, use the ipa server-find --servrole command to search for all servers with a particular server role enabled. For example, to search for all CA servers: 29.16. Promoting a replica to a CA renewal server and CRL publisher server If your IdM deployment uses an embedded certificate authority (CA), one of the IdM CA servers acts as the CA renewal server, a server that manages the renewal of CA subsystem certificates. One of the IdM CA servers also acts as the IdM CRL publisher server, a server that generates certificate revocation lists. By default, the CA renewal server and CRL publisher server roles are installed on the first server on which the system administrator installed the CA role using the ipa-server-install or ipa-ca-install command. 
You can, however, transfer either of the two roles to any other IdM server on which the CA role is enabled. Prerequisites You have the IdM administrator credentials. Procedure Change the current CA renewal server. Configure a replica to generate CRLs. 29.17. Demoting or promoting hidden replicas After a replica has been installed, you can configure whether the replica is hidden or visible. For details about hidden replicas, see The hidden replica mode . Prerequisites Ensure that the replica is not the DNSSEC key master. If it is, move the service to another replica before making this replica hidden. Ensure that the replica is not a CA renewal server. If it is, move the service to another replica before making this replica hidden. For details, see Changing and resetting IdM CA renewal server . Note The hidden replica feature, introduced in RHEL 8.1 as a Technology Preview, is fully supported starting with RHEL 8.2. Procedure To hide a replica: To make a replica visible again: To view a list of all the hidden replicas in your topology: If all of your replicas are enabled, the command output does not mention hidden replicas. Additional resources Planning the replica topology Uninstalling an IdM server Failover, load-balancing, and high-availability in IdM
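A sketch of the command-line side of these role changes follows. The option and tool names shown here (--ca-renewal-master-server and ipa-crlgen-manage) match recent RHEL 8 releases but should be confirmed with ipa config-mod --help and the ipa-crlgen-manage(1) man page on your version; hostnames are examples.

$ ipa config-mod --ca-renewal-master-server server2.example.com   # transfer the CA renewal server role
$ ipa config-show | grep -i renewal                               # confirm the new CA renewal server
$ ipa-crlgen-manage status                                        # check whether this server generates CRLs
$ ipa server-state replica.idm.example.com --state=hidden         # hide a replica
$ ipa config-show                                                 # hidden replicas are listed separately in the output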
[ "ipa topologysuffix-find --------------------------- 2 topology suffixes matched --------------------------- Suffix name: ca Managed LDAP suffix DN: o=ipaca Suffix name: domain Managed LDAP suffix DN: dc=example,dc=com ---------------------------- Number of entries returned 2 ----------------------------", "ipa topologysegment-find Suffix name: domain ----------------- 1 segment matched ----------------- Segment name: server1.example.com-to-server2.example.com Left node: server1.example.com Right node: server2.example.com Connectivity: both ---------------------------- Number of entries returned 1 ----------------------------", "ipa topologysegment-show Suffix name: domain Segment name: server1.example.com-to-server2.example.com Segment name: server1.example.com-to-server2.example.com Left node: server1.example.com Right node: server2.example.com Connectivity: both", "ipa topologysegment-add Suffix name: domain Left node: server1.example.com Right node: server2.example.com Segment name [server1.example.com-to-server2.example.com]: new_segment --------------------------- Added segment \"new_segment\" --------------------------- Segment name: new_segment Left node: server1.example.com Right node: server2.example.com Connectivity: both", "ipa topologysegment-show Suffix name: domain Segment name: new_segment Segment name: new_segment Left node: server1.example.com Right node: server2.example.com Connectivity: both", "ipa topologysegment-find Suffix name: domain ------------------ 8 segments matched ------------------ Segment name: new_segment Left node: server1.example.com Right node: server2.example.com Connectivity: both ---------------------------- Number of entries returned 8 ----------------------------", "ipa topologysegment-del Suffix name: domain Segment name: new_segment ----------------------------- Deleted segment \"new_segment\" -----------------------------", "ipa topologysegment-find Suffix name: domain ------------------ 7 segments matched ------------------ Segment name: server2.example.com-to-server3.example.com Left node: server2.example.com Right node: server3.example.com Connectivity: both ---------------------------- Number of entries returned 7 ----------------------------", "[user@server2 ~]USD ipa server-del Server name: server1.example.com Removing server1.example.com from replication topology, please wait ---------------------------------------------------------- Deleted IPA server \"server1.example.com\" ----------------------------------------------------------", "ipa server-install --uninstall", "ipa-replica-manage list-ruv server1.example.com:389: 6 server2.example.com:389: 5 server3.example.com:389: 4 server4.example.com:389: 12", "ipa-replica-manage clean-ruv 6 ipa-replica-manage clean-ruv 5", "dn: cn=clean replica_ID, cn=cleanallruv, cn=tasks, cn=config objectclass: extensibleObject replica-base-dn: dc=example,dc=com replica-id: replica_ID replica-force-cleaning: no cn: clean replica_ID", "ipa config-show IPA masters: server1.example.com, server2.example.com, server3.example.com IPA CA servers: server1.example.com, server2.example.com IPA CA renewal master: server1.example.com", "ipa server-show Server name: server.example.com Enabled server roles: CA server, DNS server, KRA server", "ipa server-find --servrole \"CA server\" --------------------- 2 IPA servers matched --------------------- Server name: server1.example.com Server name: server2.example.com ---------------------------- Number of entries returned 2 ----------------------------", "ipa 
server-state replica.idm.example.com --state=hidden", "ipa server-state replica.idm.example.com --state=enabled", "ipa config-show" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/installing_identity_management/assembly_managing-replication-topology_installing-identity-management
Chapter 3. Configuring DM Multipath
Chapter 3. Configuring DM Multipath You can set up DM Multipath with the mpathconf utility. This utility creates or edits the /etc/multipath.conf multipath configuration file based on the following scenarios: If the /etc/multipath.conf file already exists, the mpathconf utility will edit it. If the /etc/multipath.conf file does not exist, the mpathconf utility will create the /etc/multipath.conf file from scratch. 3.1. Checking for the device-mapper-multipath package Before setting up DM Multipath on your system, ensure that your system is up-to-date and includes the device-mapper-multipath package. Procedure Check if your system includes the device-mapper-multipath package: If your system does not include the package, it prints the following: If your system does not include the package, install it by running the following command: 3.2. Setting up basic failover configuration with DM Multipath You can set up DM Multipath for a basic failover configuration and edit the /etc/multipath.conf file before starting the multipathd daemon. Prerequisites Administrative access. Procedure Enable and initialize the multipath configuration file: Optional: Edit the /etc/multipath.conf file. Most default settings are already configured, including path_grouping_policy which is set to failover . Optional: The default naming format of multipath devices is set to /dev/mapper/mpathn format. If you prefer a different naming format: Configure DM Multipath to use the multipath device WWID as its name, instead of the mpath_n_ user-friendly naming scheme: Reload the configuration of the DM Multipath daemon: Start the DM Multipath daemon: Verification Confirm that the DM Multipath daemon is running without issues: Verify the naming format of multipath devices: 3.3. Ignoring local disks when generating multipath devices Some machines have local SCSI cards for their internal disks and DM Multipath is not recommended for these devices. If you set the find_multipaths configuration parameter to yes , you do not have to disable multipathing on these devices. If you do not set the find_multipaths configuration parameter to yes , you can use the following procedure to modify the DM Multipath configuration file to ignore the local disks when configuring multipath. Procedure Identify the internal disk using any known parameters such as the device's model, path or vendor, and determine its WWID by using any one of the following options: Display existing multipath devices: Display additional multipath devices that DM Multipath could create: Display device information: In this example, /dev/sda is the internal disk and its WWID is WDC_WD800JD-75MSA3_WD-WMAM9FU71040 . Edit the blacklist section of the /etc/multipath.conf file to ignore this device, using its WWID attribute: Warning Although you could identify the device using its devnode parameter, such as sda , it would not be a safe procedure, because /dev/sda is not guaranteed to refer to the same device on reboot. Check for any configuration errors in the /etc/multipath.conf file: To see the full report, do not discard the command output: Remake the initramfs if the disk is included in initramfs . For more information, see Configuring multipathing in initramfs . Reload the /etc/multipath.conf file by reconfiguring the multipathd daemon: Note Multipath devices on top of local disks cannot be removed when in use. To ignore such device, stop all users of the device. For example, by unmounting any filesystem on top of it and deactivating any logical volumes using it. 
If this is not possible, you can reboot the system to remove the multipath device. Verification Verify that the internal disk is ignored and it is not displayed in the multipath output: List the multipathed devices: List the additional devices that DM Multipath could create: Additional resources multipath.conf(5) man page on your system 3.4. Configuring additional storage with DM Multipath By default, DM Multipath includes built-in configurations for the most common storage arrays that support DM Multipath. If your storage array does not already have a configuration, you can add one by editing the /etc/multipath.conf file. Note Add additional storage devices during the initial configuration to align the setup with your anticipated needs. DM Multipath enables adding devices later for scalability or upgrades, but this approach may require adjusting configurations to ensure compatibility. Procedure View the default configuration values and supported devices: Edit the /etc/multipath.conf file to set up your multipath configuration. Example 3.1. DM Multipath Configuration for HP OPEN-V Storage Device Save your changes and close the editor. Update the multipath device list by scanning for new devices: Verification Confirm that the multipath devices are recognized correctly: 3.5. Configuring multipathing in initramfs Setting up multipathing in the initramfs file system is essential for seamless storage functionality, particularly in scenarios requiring redundancy and load balancing. This setup guarantees that multipath devices are available early in the boot process, which is crucial for maintaining the integrity of the storage setup and preventing potential issues. Prerequisites You have configured DM Multipath on your system. Procedure Rebuild the initramfs file system with the multipath configuration files: Note When using multipath in the initramfs and modifying its configuration files, remember to rebuild the initramfs for the changes to take effect. If your root device uses multipath, the dracut command automatically includes the multipath module in the initramfs . Optional: If multipath in the initramfs is no longer necessary: Remove the multipath configuration file: Rebuild the initramfs without the multipath module: Verification Check if multipath-related files and configurations are present: Note While the verification steps provided can give you an indication of success, a final test boot is recommended to ensure that the configuration works as expected. After the reboot, confirm that the multipath devices are recognized correctly:
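Pulling the pieces of this chapter together, a minimal end-to-end sketch looks like the following. The device names in the output will differ on your system, and the initramfs rebuild is only needed if you want multipath available at boot.

$ dnf install device-mapper-multipath        # ensure the package is present
$ mpathconf --enable --with_multipathd y     # create /etc/multipath.conf and start multipathd
$ multipath -ll                              # list the multipath devices that were assembled
$ dracut --force --add multipath             # rebuild the initramfs with the multipath module
$ lsinitrd -m | grep multipath               # confirm the module is included in the initramfs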
[ "rpm -q device-mapper-multipath device-mapper-multipath- current-package-version", "package device-mapper-multipath is not installed", "dnf install device-mapper-multipath", "mpathconf --enable", "mpathconf --enable --user_friendly_names n", "systemctl reload multipathd.service", "systemctl start multipathd.service", "systemctl status multipathd.service", "ls /dev/mapper/", "multipath -v2 -l mpatha ( WDC_WD800JD-75MSA3_WD-WMAM9FU71040 ) dm-2 ATA,WDC WD800JD-75MS size=33 GB features=\"0\" hwhandler=\"0\" wp=rw `-+- policy='round-robin 0' prio=0 status=active |- 0:0:0:0 sda 8:0 active undef running", "multipath -v2 -d : mpatha ( WDC_WD800JD-75MSA3_WD-WMAM9FU71040 ) dm-2 ATA,WDC WD800JD-75MS size=33 GB features=\"0\" hwhandler=\"0\" wp=undef `-+- policy='round-robin 0' prio=1 status=undef |- 0:0:0:0 sda 8:0 undef ready running", "multipathd show paths raw format \"%d %w\" | grep sda sda WDC_WD800JD-75MSA3_WD-WMAM9FU71040", "blacklist { wwid WDC_WD800JD-75MSA3_WD-WMAM9FU71040 }", "multipath -t > /dev/null", "multipath -t", "systemctl reload multipathd", "multipath -v2 -l", "multipath -v2 -d", "multipathd show config", "Set default configurations for all devices managed by DM Multipath defaults { # Enable user-friendly names for devices user_friendly_names yes } devices { # Define configuration for HP OPEN-V storage device { vendor \"HP\" pproduct \"OPEN-V\" no_path_retry 18 } }", "multipath -r", "multipath -ll", "dracut --force --add multipath", "rm /etc/dracut.conf.d/multipath.conf", "dracut --force --omit multipath", "lsinitrd /path/to/initramfs.img -m | grep multipath", "multipath -ll" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_device_mapper_multipath/configuring-dm-multipath_configuring-device-mapper-multipath
4.150. libvirt
4.150. libvirt 4.150.1. RHBA-2011:1513 - libvirt bug fix and enhancement update Updated libvirt packages that fix multiple bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The libvirt library is a C API for managing and interacting with the virtualization capabilities of Linux and other operating systems. In addition, libvirt provides tools for remote management of virtualized systems. Bug Fixes BZ# 710150 Due to a bug in the qemuAuditDisk() function, hot unplug failures were never audited, and a hot unplug success was audited as a failure. This bug has been fixed, and auditing of disk hot unplug operations now works as expected. BZ# 711151 Previously, a bug in the qemu-img command line arguments prevented the creation of encrypted volumes. This update fixes the bug, and encrypted volumes can now be successfully created. BZ# 711206 Previously, when a debug process was being activated, the act of preparing a debug message ended up with dereferencing a Universally Unique Identifier (UUID) prior to the NULL argument check. As a consequence, an API running the debug process sometimes terminated unexpectedly with a segmentation fault. With this update, a patch has been applied to address this issue, and crashes no longer occur in the described scenario. BZ# 742646 Due to a programming mistake in the initialization code of the libvirtd daemon, the QEMU driver could have failed to find the user or group ID of the qemu application on the system. As a consequence, libvirtd failed to start. With this update, the error has been corrected and libvirtd now starts as expected. BZ# 741217 If the QEMU driver failed to update information about currently allocated memory, installing a new virtual machine failed with the following error message: With this update, the driver has been modified to not consider this behavior as fatal. Installation now proceeds and finishes as expected. BZ# 690695 Previously, when running the "virsh vol-create-from" command on a Logical Volume Manager (LVM) storage pool, performance of the command was very low and the operation consumed an excessive amount of time. This bug has been fixed in the virStorageVolCreateXMLFrom() function, and the performance problem of the command no longer occurs. BZ# 690175 When migrating a QEMU domain and restarting the libvirtd daemon, the migration was not properly canceled. The domain was left on the target host or ended up in an unexpected state on the source host. With this update, the libvirtd daemon tracks ongoing migrations in a persistent file, and properly cancels them when the daemon is being restarted. BZ# 738146 The "virsh dump" command can fail to dump the core of a domain if the user sets incorrect permissions for the destination directory. Previously, the virsh(1) man page did not provide any information about the permissions required to successfully complete a domain core dump. This information is now included in the man page. BZ# 734773 When shutting down a guest operating system, libvirt killed the QEMU process without giving it enough time to flush all disk I/O buffers. This led in certain cases to loss of data or corruption of the virtual disk. With this update, libvirt gives QEMU enough time to flush the buffers and exits instead of forcibly killing the process. BZ# 738148 When the user started a virtual machine, changed its definition, and migrated the virtual machine, the new settings were not available on the destination. 
With this update, the settings are transferred to the destination by a live XML file which includes the current settings of the running virtual machine. Now, settings are kept during the migration. BZ# 669549 Previously, libvirt did not exercise enough control over whether a domain change should affect the running domain, the persistent configuration, or both. Various virsh commands were inconsistent, and attempts to change the configuration of a running domain did not persist across a reboot. With this update, several libvirt commands have new flags to distinguish between live and persistent configurations. The corresponding virsh commands can be used with the "--config" and "--live" flags to provide a more consistent interface. Management applications have finer control over whether various configuration changes affect hot plug, boot, or both. BZ# 674537 Various logic bugs affected the handling of snapshots in libvirt. Among these, restarting the libvirtd daemon would lose track of the current snapshot, and a change in QEMU behavior would trigger a latent bug in libvirt's ability to restore certain snapshots. Snapshots were therefore unreliable and hard to manage. This update provides a number of bug fixes and flags to the existing snapshot management APIs, so that libvirt can provide all the snapshot features as documented. Management applications can use system checkpoint snapshots for better control when rolling back to known stable states of a virtual machine. BZ# 677229 Previously, libvirt did not support attaching interfaces to an inactive virtual machine by using the "virsh attach-interface" command. Users had to use workarounds, for example editing the whole domain by executing "virsh edit". This update adds support for attaching interfaces even to inactive virtual machines. As a result, users do not need to use the workarounds, but can use virsh directly. BZ# 727474 Previously, libvirt used an improper separator (comma) in the "lvs" command. This caused the regular expression used to parse the "lvs" output to malfunction. In addition, libvirt did not use the right mechanism to format multiple XML "devices" elements for multiple device paths of a striped volume. As a consequence, creation of any logical pool failed for LVM volume groups with striped volumes. With this update, a different separator (hash) is used. Multiple device paths of a striped volume are parsed correctly and multiple XML "devices" elements are formatted as expected. Users are now able to create logical pools which contain a striped volume, and get proper XML for the striped volume as well. BZ# 720269 If the source QEMU process was not able to connect to the destination process when migrating a QEMU domain, libvirt could report "undefined error". With this update, libvirt creates the connection to the destination QEMU process and makes QEMU use this pre-created connection. This allows libvirt to report meaningful errors if the connection attempt fails. BZ# 707257 If an NFS (Network File System) storage was configured to be accessible only by users from a supplementary group for a user whose identity was used to run QEMU processes, the libvirtd daemon in certain cases failed to access or create files on that storage. With this update, libvirtd properly initializes supplementary groups when changing identity to QEMU users and groups. This allows libvirtd to access and create such files.
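As an illustration of the persistent attachment and the "--config"/"--live" distinction described above (BZ# 669549 and BZ# 677229), the following is a minimal sketch of attaching a network interface to a shut-off domain; the domain name "guest1" and the network name "default" are illustrative placeholders, not values taken from this document:
# attach a virtio NIC from the "default" libvirt network to the inactive domain "guest1",
# updating only the persistent configuration so the device is present on the next boot
virsh attach-interface guest1 network default --model virtio --config
# for a running domain, adding --live as well would hot-plug the device immediately
virsh attach-interface guest1 network default --model virtio --config --live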
BZ# 698825 Previously, it was not possible to maximize the performance of a KVM guest using memory binding on a NUMA (Non-Uniform Memory Access) host if the guest was started by libvirt. This update introduces new XML definitions to support NUMA memory policy configuration. Users can now specify the NUMA memory policy by using the guest XML definitions. The performance can be adjusted by NUMA memory binding. BZ# 704144 The libvirt library uses the "boot=on" option to determine which disk is bootable. The version of the qemu-kvm utility did not support this option, and libvirt could not use it. As a consequence, when an IDE disk was added as the second storage with a virtio disk being set up as the first one by default, the operating system tried to boot from the IDE disk rather than the virtio disk and either failed to boot with the "No bootable disk" error message, or the system booted whatever operating system was on the IDE disk. With this update, the boot configuration is translated into bootindex, which provides control over which device is used for booting a guest operating system. BZ# 751900 Prior to this update, when a QEMU migration to a file was triggered, libvirt temporarily set the migration bandwidth to "unlimited" in an attempt to speed up saving of the state of the virtual machine. A limitation in QEMU caused QEMU not to return from the migrate command until the migration itself was complete. This locked out the QEMU monitor response loop and the migration to file process could not be interrupted. With this update, migration to file can be monitored for progress or interruptions. Now, libvirt no longer ignores job info or abort commands during the migration to file process. BZ# 738970 The virsh(1) man page did not provide detailed information about the drivers used for the "attach-disk" command with QEMU domains. If the command on a QEMU domain failed with an incorrect driver, users were unaware of what driver name should be used with QEMU. To fix this problem, the manual page now specifies what the "driver" parameter can contain. BZ# 693203 Running the "virsh list" command could become unresponsive when a QEMU process tracked by the libvirtd daemon did not respond to the monitor command. With this update, "virsh list" no longer requires interaction with running QEMU processes and can therefore list all domains even if a guest becomes unresponsive. BZ# 691830 If the user wanted to take a screenshot of a running virtual machine, the user had to use other tools (for example, virt-manager). A new libvirt API, virDomainScreenshot, is provided with this update, and allows users to take screenshots if the hypervisor supports it. Now, users no longer need to use third-party tools to take screenshots, but can use libvirt directly. BZ# 682237 SPICE (the Simple Protocol for Independent Computing Environments) supports multiple compression settings for audio, images and streaming. With this update, the libvirt XML schema is extended to support these kinds of settings so that users can set SPICE compression options directly in libvirt. BZ# 682084 Previously, libvirt did not support virtual CPU pinning on inactive virtual machines by running the "virsh vcpupin" command. Users had to use workarounds instead. With this update, libvirt now supports virtual CPU pinning on inactive virtual machines. Users no longer need to use workarounds but can use virsh directly.
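A minimal sketch of the persistent CPU pinning described above for BZ# 682084; the domain name and the CPU list are illustrative placeholders:
# pin virtual CPU 0 of the inactive domain "guest1" to host CPUs 0-3;
# --config stores the setting in the persistent definition so it survives the next start
virsh vcpupin guest1 0 0-3 --config
# display the pinning currently recorded for the domain
virsh vcpupin guest1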
BZ# 681458 Previously, libvirt did not support attaching devices to an inactive virtual machine by running the "virsh attach-device" command. Users had to use workarounds, for example editing the whole domain using "virsh edit". With this update, libvirt provides enhanced support for attaching devices even to inactive virtual machines. Users no longer need to use these workarounds but can use virsh directly. BZ# 727088 Previously, the new storage type added to libvirt was not fully supported. As a consequence, directory type storage volumes were reported to be file storage volumes. The new volume type has been added to the public API. The volume type is now correctly reported and displayed in associated tools. BZ# 641087 Users were allowed to change the domain's CPU affinity dynamically in libvirt; however, no persistent XML was provided, and the settings were lost when the domain was started again. This update introduces a new XML definition to support the persistent configuration of the domain's CPU affinity. Also, new flags ("--live", "--config", and "--current") are introduced for the "virsh vcpupin" command. Now, the domain's CPU affinity persists across restarts. BZ# 730750 Previously, libvirt attempted to load a managed save file instead of starting a domain from the beginning, even if the managed save file was damaged and could not be loaded. This could confuse users who were not aware of the problem. This update introduces a new command, "virsh start --force-boot", as well as improved logic which ensures that a managed save file is not loaded if it is corrupted. Use of managed save images no longer causes confusion. BZ# 728153 If both the SysV init and upstart scripts were installed, and the libvirtd daemon was managed by upstart, the SysV init script was unaware of this. As a consequence, the SysV init script reported confusing error messages. The user was unable to restart the daemon by using the SysV init script, and was also unaware of the fact that libvirtd was managed by upstart. With this update, the SysV init script checks whether libvirtd is managed by upstart. If it is, the user is advised to use the upstart tools to manage libvirtd. Users are now able to restart the libvirtd daemon while using upstart. BZ# 728428 When restarting the libvirtd daemon, libvirt reloaded the domain configuration from the status XML if the XML existed (that is, the domain was running). However, the original domain configuration was not recorded, and the domain configuration could not be restored to the original one. As a consequence, nonpersistent attached devices still existed after restarting libvirtd. With this update, the original domain configuration is recorded by assigning the persistent domain configuration to the newDef field if it is NULL and the domain is running. Nonpersistent attached devices no longer exist after libvirtd is restarted. BZ# 728654 A broken configuration file caused the libvirtd daemon to exit silently, with no error messages logged or any other indication of a problem, which could confuse the user. Error handling messages have been added to the early start phases of libvirtd. Errors which occur during the start are now printed and logged. BZ# 678027 Previously, the DMI (Desktop Management Interface) data was not present on all architectures. Running the "virsh sysinfo" command failed on certain architectures because the DMI data was obtained from the missing /sys/devices/virtual/dmi tree.
With this update, the DMI information is no longer fetched on non-Intel architectures. As a result, running the "virsh sysinfo" command works as expected. BZ# 730244 Previously, an invalid variable was used to construct error messages. If a migration command failed, the error message reported the remote URI to be "(null)" instead of the requested migration URI. The reason why the command failed was therefore unknown to the user. This update uses the correct variable, which contains the migration URI. As a result, the correct migration URI is now reported if an error occurs. BZ# 667631 The monitor command in QEMU that provides migration information for SPICE was modified. As a consequence, libvirt was unable to send the migration information to SPICE, the session failed, and the migration terminated. This update modifies libvirt to adapt to the new monitor command. As a result, users can now perform a successful migration. BZ# 667624 The monitor command in QEMU that is used to change passwords for VNC and SPICE sessions was changed. As a consequence, libvirt was unable to set any password. This update modifies libvirt to adapt to the new command. As a result, users can successfully set passwords for VNC and SPICE sessions. BZ# 667620 Because QEMU changed the format of SPICE events, libvirt was not able to resend these events to users. This update modifies libvirt to adapt to the new format. As a result, SPICE events are successfully passed to users through libvirt. BZ# 589922 In certain cases, usually when the virt_use_nfs_selinux boolean was not set, SELinux policies prevented qemu from opening a disk image. As a consequence, qemu refused to start. This update provides a verbose error message which advises the user to set virt_use_nfs_selinux in the aforementioned scenario. BZ# 697742 Previously, libvirt did not remove the managed save file if a domain was undefined. When the user installed a new guest after destroying and undefining the previous one, the managed save file for that guest was still present, and the new guest failed to start because it would use a managed save file with the same name. This update introduces a new API, virDomainUndefineFlags, which allows users to specify flags (for example, "virsh undefine --managed-save"). The managed save file can now be successfully removed. If the user does not specify any option, a comprehensive error message provides additional information. BZ# 722862 Previously, the virsh(1) man page contained duplicate documentation of the "iface-name" command, did not provide sufficient documentation of the "iface-mac" command, and contained certain inconsistent option names. The man page has been modified to provide correct descriptions. BZ# 692355 Previously, libvirt assigned PCI IDs to virtual devices as needed. As a consequence, migration of guests could fail in certain cases. With this update, libvirt reserves specific device IDs for virtual device types, notably 0x01 for IDE controllers and 0x02 for VGA devices. When migrating guests with other device types on these device IDs, users need to manually edit the guest XML files to reassign devices away from reserved IDs. Enhancements BZ# 705814 The libvirt packages have been upgraded to upstream version 0.9.4, which introduces new APIs for libvirt and adds various enhancements over the previous version. BZ# 632760 In certain scenarios, users want to adjust the traffic of a virtual machine, its specific NIC (Network Interface Controller), or a whole virtual network.
Prior to this update, users often manually ran scripts to set up traffic shaping. This update extends the network and interface XML definitions. Now, users can set bandwidth limitations, or specify average, peak, and burst rates, directly in libvirt. BZ# 692769 Users can limit the virtual CPUs of a virtual machine by using control groups (cgroups). However, the appropriate QEMU process needs to be placed into a specific cgroup. Prior to this update, libvirt was missing this feature, and users had to use their own workarounds. With this update, libvirt can place a process into a cgroup, which can also be specified by using the XML definition of a virtual machine. As a result, users can now set virtual CPU bandwidth limits directly in libvirt. BZ# 711598 With SGA BIOS, it is possible to send boot messages to a serial line instead of a VNC/SPICE session. With this update, libvirt contains enhanced virtual machine XML descriptions so that users can configure a serial line on which boot messages are shown. Boot messages are now displayed on the serial console as expected. BZ# 643947 The physical network interface configuration can be different on each host machine, even though each host is using the same logical network. This update adds a virtual switch abstraction to libvirt. Virtual machines can be configured identically on every host, even if the physical connectivity is different. BZ# 698340 Previously, libvirt did not support setting the ioeventfd feature for virtio disks or interfaces. QEMU could experience high CPU usage as a consequence. Support for this feature has been added in the XML definition of a virtual machine. Users can now enable the ioeventfd feature in order to lower CPU usage. BZ# 703851 The address used for listening for VNC connections to a libvirt guest was previously required to be an IP address. In cases where the guest migrates from one host to another, and the administrator wants the guest to be listening on a publicly visible interface, this address must be changed during migration. To make this change possible, this update adds the option of specifying a listen network by name. Now, the guest can be migrated between the hosts, and its VNC listen address changes automatically as it migrates. BZ# 598792 The "--persistent" option for the "update-device" command was not implemented in virsh. Users experienced error messages saying that this feature was not supported. This update modifies libvirt to distinguish between the live and persistent XML definition of a virtual machine. Users can now change the definition of a virtual machine while the machine is running. The settings are applied after restarting the virtual machine. BZ# 632498 Running the "virsh dump" command against a virtual machine caused it to dump its memory. However, users often had to manually reboot the virtual machine after performing a dump. A new option, "--reset", has been implemented for "virsh dump", so that users can now use virsh instead of other tools. BZ# 677228 Previously, libvirt did not support attaching disks to an inactive virtual machine by using the "virsh attach-disk" command. Users had to use workarounds instead. This update provides enhanced support for attaching disks; disks can be attached even to inactive virtual machines. Now, users can use virsh directly instead of using workarounds. BZ# 569567 Changes made to the host's network configuration by libvirt immediately and permanently modified the host's configuration files.
This caused the network to become unusable, and it was sometimes difficult to restore the original connectivity. This update adds new virsh commands, so that the current state of the network configuration can be saved and easily reverted. BZ# 634653 When migrating to a file, saving the state of a virtual machine led to the creation of large files which filled the system cache. The system performance could therefore be affected. This update introduces the new "--bypass" option for operations that involve migration to file. This prevents the cache from being filled. Management applications can now control large virtual machine state files. All users of libvirt are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. 4.150.2. RHBA-2011:1778 - libvirt bug fix update Updated libvirt packages that fix one bug are now available for Red Hat Enterprise Linux 6. The libvirt library is a C API for managing and interacting with the virtualization capabilities of Linux and other operating systems. In addition, libvirt provides tools for remote management of virtualized systems. The library also provides nwfilter support for fine-grained filtering of the network traffic reaching guests managed by libvirt. Bug Fix BZ# 754182 Previously, nwfilter support was dependent on the ability to execute scripts in the /tmp directory, which is considered unsafe. With this ability blocked, guests relying on the nwfilter component were not allowed to start. The underlying code has been modified so that nwfilter no longer needs to execute scripts in the /tmp directory. All users of libvirt are advised to upgrade to these updated packages, which fix this bug. After installing these updated packages, libvirtd must be restarted. Use the "service libvirtd restart" command for this update to take effect. 4.150.3. RHBA-2012:0013 - libvirt bug fix and enhancement update Updated libvirt packages that fix multiple bugs and add two enhancements are now available for Red Hat Enterprise Linux 6. The libvirt library is a C API for managing and interacting with the virtualization capabilities of Linux and other operating systems. Bug Fixes BZ# 768469 This update forces all libvirt managed KVM guests with virtio drives to run with the scsi=off option. This will prevent SCSI requests in guests from being passed to underlying block devices on the host; however, a separate bug is preventing scsi=off from working correctly. A malicious, privileged guest user could issue a crafted request that would still be passed to the underlying block device. A future qemu-kvm update will correct the scsi=off functionality, blocking such crafted requests, and allowing CVE-2011-4127 to be mitigated before the kernel update is applied. As scsi=off may break legitimate pass-through of SCSI requests, this update also adds a new value for the device attribute in the disk XML element: lun. This type is like the default "disk" device, but will allow SCSI requests from guests to be passed to the underlying block device on the host. (Using the lun device attribute causes the guest to run with scsi=on.) Note: After installing the RHSA-2011:1849 kernel update, it will not be possible for guests to issue SCSI requests on virtio drives backed by partitions or LVM volumes, even if device='lun' is used. It will only be possible to issue SCSI requests on virtio drives backed by whole disks. Refer to Red Hat Knowledgebase 67869 for details about CVE-2011-4127.
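As a quick way to relate the lun device type described above to an existing guest, the following sketch inspects a guest's disk definitions; the domain name "guest1" is an illustrative placeholder and the exact output depends on the guest configuration:
# dump the live XML of the guest and show its disk elements, including the device attribute
# (device="disk" by default, device="lun" when SCSI request pass-through is wanted)
virsh dumpxml guest1 | grep -A 3 "<disk"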
BZ# 769674 Due to an error in the bridge network driver, libvirt did not respect the network configuration properly. Therefore, if a network had the forward element set to mode=bridge, libvirt incorrectly added iptables rules for such a network every time the libvirtd daemon was restarted and the network was active. This could cause the network to become inaccessible. With this update, libvirt reloads iptables rules only if the forward element is set to mode=route, mode=nat, or mode=none. BZ# 769853 Previously, migration of a virtual machine failed if the machine had an ISO image attached as a CD-ROM drive and the ISO domain was inactive. With this update, libvirt introduces the new startupPolicy attribute for removable devices, which allows CD-ROM and diskette drives to be marked as optional. With this option, virtual machines can now be started or migrated without removable drives if the source image is inaccessible. BZ# 770955 Under certain circumstances, a race condition between asynchronous jobs and query jobs could occur in the QEMU monitor. Consequently, after the QEMU guest was stopped, it failed to start again with the following error message: With this update, libvirt handles this situation properly, and guests now start as expected. BZ# 770957 The libvirt package was missing a dependency on the avahi package. The dependency is required due to mDNS support, which is turned on by default. As a consequence, the libvirtd daemon failed to start if the libvirt package was installed on the system without Avahi. With this update, the dependency on avahi is now defined in the libvirt.spec file, and Avahi is installed along with libvirt. BZ# 770958 Due to several problems with security labeling, libvirtd became unresponsive when destroying multiple guest domains with disks on an unreachable NFS storage. This update fixes the security labeling problems and libvirtd no longer hangs under these circumstances. BZ# 770961 Previously, libvirt incorrectly released resources in the macvtap network driver in the underlying code for QEMU. As a consequence, after an attempt to create a virtual machine failed, a macvtap device that was created for the machine could not be deleted from the system. No virtual machine using the same MAC address could be created in such a case. With this update, an incorrect function call has been removed, and macvtap devices are properly removed from the system in the scenario described. BZ# 770966 Previously, libvirt defined a hard limit of 500 for the maximum number of domains in its Python bindings. As a consequence, the vdsmd daemon was unable to properly discover all virtual machines on systems with more than 500 guests. With this update, the number of domains is now determined dynamically and vdsmd correctly discovers all virtual machines. Enhancements BZ# 759061 This update adds support for VMware vSphere Hypervisor (ESXi) 5 installations. BZ# 770959 When shutting down, a virtual machine changed its status from the Up state to the Paused state before it was shut down. The Paused state represented the state in which the guest had already been stopped, but QEMU was still flushing its internal buffers and waiting for libvirt to kill it. Because this state change could confuse users, this update adds the respective events and modifies libvirt to use the shutdown state. A virtual machine now moves from the Up state to Powering Down, and then to the Down state.
All users of libvirt are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. 4.150.4. RHBA-2012:0342 - libvirt bug fix update Updated libvirt packages that fix four bugs are now available for Red Hat Enterprise Linux 6. The libvirt library is a C API for managing and interacting with the virtualization capabilities of Linux and other operating systems. In addition, libvirt provides tools for remote management of virtualized systems. Bug Fixes BZ# 783453 Under certain circumstances, a rare race condition between the poll() event handler and the dmidecode utility could occur. This race could result in dmidecode waiting indefinitely to perform a read operation on the already closed file descriptor. As a consequence, it was impossible to perform any tasks for virtualized guests using the libvirtd management daemon, or to perform certain tasks using the virt-manager utility, such as creating a new virtual machine. This update modifies the underlying code so that the race condition no longer occurs, and libvirtd and virt-manager work as expected. BZ# 784785 Previously, when libvirt tried to attach certain SR-IOV (Single Root I/O Virtualization) devices to virtual guests, these attempts failed with "Unable to reset PCI device" error messages. This patch modifies the underlying code so that these PCI devices can now be successfully attached to guests. BZ# 787620 When migrating a QEMU domain and using SPICE for a remote display, the migration failed and the display was erratic under certain circumstances. This happened because, with the incoming migration connection open, QEMU was unable to accept any other connections on the target host. With this update, the underlying code has been modified to delay the migration connection until the SPICE client is connected to the target destination. Guest domains can now be successfully migrated without disrupting the display during the migration. BZ# 790779 Previously, if the libvirt package was built with avahi support, libvirt required the avahi package to be installed on the system as a prerequisite for its own installation. If the avahi package could not be installed on the system due to security concerns, installation of libvirt failed. This update modifies the libvirt.spec file to require only the avahi-libs package. The libvirt package is now successfully installed and libvirtd starts as expected. All users of libvirt are advised to upgrade to these updated packages, which fix these bugs. After installing these updated packages, libvirtd must be restarted. Use the "service libvirtd restart" command for this update to take effect. 4.150.5. RHBA-2012:0419 - libvirt bug fix update Updated libvirt packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The libvirt library is a C API for managing and interacting with the virtualization capabilities of Linux and other operating systems. In addition, libvirt provides tools for remote management of virtualized systems. Bug Fixes BZ# 798177 If the user attempted to connect locally as a non-root user to the libvirtd daemon (using "qemu:///user"), the ".libvirt" directory was not created in the home directory. As a consequence, non-root users failed to use libvirt. This update ensures that the directory is created, and libvirt now works as expected for non-root users.
BZ# 798906 The localtime_r() function used in the libvirt code was not async signal safe, which caused child processes to enter a deadlock when attempting to generate a log message. As a consequence, the virsh utility became unresponsive. This update applies backported patches and adds a new API for generating log time stamps in an async-signal safe manner. The virsh utility no longer hangs under these circumstances. All users of libvirt are advised to upgrade to these updated packages, which fix these bugs. After installing these updated packages, libvirtd must be restarted. Use the "service libvirtd restart" command for this update to take effect. 4.150.6. RHBA-2012:0500 - libvirt bug fix update Updated libvirt packages that fix one bug are now available for Red Hat Enterprise Linux 6. The libvirt library is a C API for managing and interacting with the virtualization capabilities of Linux and other operating systems. In addition, libvirt provides tools for remote management of virtualized systems. Bug Fix BZ# 806206 When a live migration of a guest was terminated abruptly (using the Ctrl+C key combination), the libvirt daemon could have failed to accept any future migration request of that guest with the following error message: This update adds support for registering cleanup callbacks which are called for a domain when a connection is closed. The migration API is more robust to failures, and if a migration process is terminated, it can be restarted on a subsequent command. All users of libvirt are advised to upgrade to these updated packages, which fix this bug. After installing these updated packages, libvirtd must be restarted. Use the "service libvirtd restart" command for this update to take effect. 4.150.7. RHBA-2012:0727 - libvirt bug fix update Updated libvirt packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The libvirt library is a C API for managing and interacting with the virtualization capabilities of Linux and other operating systems. In addition, libvirt provides tools for remote management of virtualized systems. Bug Fixes BZ# 826639 Due to a locking problem in one of the routines involved in the migration process, migrations could become unresponsive, for example, when repeatedly migrating a domain between two nodes. The locking problem has been fixed with this update, and migrating a guest is now successful in this scenario. BZ# 827047 Closing a file descriptor multiple times could, under certain circumstances, lead to a failure to execute the qemu-kvm binary. As a consequence, a guest failed to start. A patch has been applied to address this issue, so that the guest now starts successfully. All users of libvirt are advised to upgrade to these updated packages, which fix these bugs.
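For reference, applying one of these errata and restarting the daemon as instructed above might look like the following sketch; the exact package versions depend on the erratum being applied and are not shown here:
# update the libvirt packages to the latest available erratum
yum update libvirt
# restart the daemon so the update takes effect
service libvirtd restart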
[ "ERROR cannot send monitor command '{\"execute\":\"query-balloon\"}': Connection reset by peer", "error: Failed to start domain [domain name] error: Timed out during operation cannot acquire state change lock", "error: Timed out during operation: cannot acquire state change lock" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/libvirt
Chapter 3. ProjectRequest [project.openshift.io/v1]
Chapter 3. ProjectRequest [project.openshift.io/v1] Description ProjectRequest is the set of options necessary to fully qualify a project request Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources description string Description is the description to apply to a project displayName string DisplayName is the display name to apply to a project kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 3.2. API endpoints The following API endpoints are available: /apis/project.openshift.io/v1/projectrequests GET : list objects of kind ProjectRequest POST : create a ProjectRequest 3.2.1. /apis/project.openshift.io/v1/projectrequests HTTP method GET Description list objects of kind ProjectRequest Table 3.1. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method POST Description create a ProjectRequest Table 3.2. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.3. Body parameters Parameter Type Description body ProjectRequest schema Table 3.4. HTTP responses HTTP code Response body 200 - OK ProjectRequest schema 201 - Created ProjectRequest schema 202 - Accepted ProjectRequest schema 401 - Unauthorized Empty
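As an illustration only (the client commands below are not part of this API reference, and the project name, display name, and description are placeholders), a ProjectRequest is usually created through the oc client, which posts to the endpoint documented above:
# request a new project; this submits a ProjectRequest and results in a new Project
oc new-project my-project --display-name="My Project" --description="Requested via ProjectRequest"
# roughly equivalent request expressed as a ProjectRequest manifest
oc create -f - <<'EOF'
apiVersion: project.openshift.io/v1
kind: ProjectRequest
metadata:
  name: my-project
displayName: My Project
description: Requested via ProjectRequest
EOF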
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/project_apis/projectrequest-project-openshift-io-v1
38.2. Systems Registered with RHN Classic
38.2. Systems Registered with RHN Classic There is no command to specifically unregister a system which is registered with RHN Classic. To delete the registration locally, remove the file with the system ID assigned to the system when it was registered: Note If the system is being unregistered in order to register it with Red Hat Subscription Management (Customer Portal Subscription Management, Subscription Asset Manager, or CloudForms System Engine), then instead of unregistering the system, use the rhn-migrate-classic-to-rhsm script to migrate the system and all its attached subscriptions to the specified Red Hat Subscription Management server. Using the migration scripts is covered in the Subscription Management Guide .
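A minimal sketch of the two paths described above; server and credential details for Red Hat Subscription Management are omitted because they depend on the environment:
# delete only the local RHN Classic registration
rm -rf /etc/sysconfig/rhn/systemid
# or, instead of unregistering, migrate the system and its attached subscriptions
# to Red Hat Subscription Management
rhn-migrate-classic-to-rhsm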
[ "rm -rf /etc/sysconfig/rhn/systemid" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/unregister-rhn
27.2. At and Batch
27.2. At and Batch While Cron is used to schedule recurring tasks, the At utility is used to schedule a one-time task at a specific time and the Batch utility is used to schedule a one-time task to be executed when the system load average drops below 0.8. 27.2.1. Installing At and Batch To determine if the at package is already installed on your system, issue the rpm -q at command. The command returns the full name of the at package if already installed or notifies you that the package is not available. To install the packages, use the yum command in the following form: yum install package To install At and Batch, type the following at a shell prompt: Note that you must have superuser privileges (that is, you must be logged in as root ) to run this command. For more information on how to install new packages in Red Hat Enterprise Linux, see Section 8.2.4, "Installing Packages" .
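A brief sketch of the one-time scheduling that At and Batch provide; the job command and the time are illustrative placeholders:
# run a one-time job at 23:00 today (or tomorrow, if 23:00 has already passed)
echo "tar -czf /tmp/home-backup.tar.gz /home" | at 23:00
# run a one-time job as soon as the system load average drops below 0.8
echo "tar -czf /tmp/home-backup.tar.gz /home" | batch
# list the jobs that are still pending
atq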
[ "~]# yum install at" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-autotasks-at-batch
8.2.5. Removing Packages
8.2.5. Removing Packages Similarly to package installation, Yum allows you to uninstall (remove in RPM and Yum terminology) both individual packages and a package group. Removing Individual Packages To uninstall a particular package, as well as any packages that depend on it, run the following command as root : yum remove package_name As when you install multiple packages, you can remove several at once by adding more package names to the command. For example, to remove totem , rhythmbox , and sound-juicer , type the following at a shell prompt: Similar to install , remove can take these arguments: package names glob expressions file lists package provides Warning Yum is not able to remove a package without also removing packages which depend on it. This type of operation can only be performed by RPM , is not advised, and can potentially leave your system in a non-functioning state or cause applications to misbehave and/or crash. For further information, see Section B.2.4, "Uninstalling" in the RPM chapter. Removing a Package Group You can remove a package group using syntax congruent with the install syntax: yum groupremove group yum remove @ group The following are alternative but equivalent ways of removing the KDE Desktop group: Important When you tell yum to remove a package group, it will remove every package in that group, even if those packages are members of other package groups or dependencies of other installed packages. However, you can instruct yum to remove only those packages which are not required by any other packages or groups by adding the groupremove_leaf_only=1 directive to the [main] section of the /etc/yum.conf configuration file. For more information on this directive, see Section 8.4.1, "Setting [main] Options" .
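A hedged sketch of the groupremove_leaf_only behavior described above; the sed invocation assumes a stock /etc/yum.conf whose [main] section header appears on its own line, and is only an illustration:
# add the directive directly below the [main] section header of /etc/yum.conf
sed -i '/^\[main\]/a groupremove_leaf_only=1' /etc/yum.conf
# group removals now only remove packages that no other package or group requires
yum groupremove "KDE Desktop"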
[ "~]# yum remove totem rhythmbox sound-juicer", "~]# yum groupremove \"KDE Desktop\" ~]# yum groupremove kde-desktop ~]# yum remove @kde-desktop" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-removing
Appendix A. Java IPv4 and IPv6 properties
Appendix A. Java IPv4 and IPv6 properties You can use Java properties to configure IPv4 and IPv6 addresses. You can subsequently export these properties to Tomcat and use address values to specify Tomcat bindings. A.1. Overview of Java IPv4 and IPv6 properties Java provides two properties that you can use to configure IPv4 and IPv6 addresses: java.net.preferIPv4Stack (default: false) If IPv6 is available, the underlying native socket is an IPv6 socket by default. This socket enables applications to connect and accept connections from IPv4 and IPv6 hosts. If applications use IPv4 sockets only, set this property to true. However, applications that are using IPv4 sockets only cannot communicate with IPv6-only hosts. java.net.preferIPv6Addresses (default: false) If a host has both IPv4 and IPv6 addresses, and IPv6 is available, the default behavior is to use IPv4 addresses over IPv6. This allows backward compatibility. If applications depend on an IPv4 address representation, such as 192.168.1.1, set this property to true to change the preference, and use IPv6 addresses over IPv4 where possible. A.2. Exporting Java IPv4 and IPv6 properties to Tomcat You can export Java IPv4 and IPv6 properties to Tomcat by setting CATALINA_OPTS in the JWS_HOME/tomcat/bin/setenv.* file. On Red Hat Enterprise Linux, the setenv file has a .sh extension. On Microsoft Windows, the setenv file has a .bat extension. Procedure If the JWS_HOME/tomcat/bin/setenv.* file does not exist, create the file. Note If you are using Red Hat Enterprise Linux, create a setenv.sh file. If you are using Microsoft Windows, create a setenv.bat file. To export Java IPv4 and IPv6 properties to Tomcat, perform either of the following steps: If you are using Red Hat Enterprise Linux, enter the following command: If you are using Microsoft Windows, enter the following command: A.3. Configuring Tomcat bindings You can configure Tomcat bindings in the JWS_HOME/tomcat/conf/server.xml file by specifying the IPv6 address. Procedure Open the JWS_HOME/tomcat/conf/server.xml file. To specify the Tomcat binding address, enter the following details: To specify the HTTP connector address, enter the following details: To specify the AJP connector address, enter the following details: Note Ensure that you replace TOMCAT_BINDING_ADDRESS, HTTP_CONNECTOR_ADDRESS, and AJP_CONNECTOR_ADDRESS with the correct IPv6 address.
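A hedged example of what a complete JWS_HOME/tomcat/bin/setenv.sh might contain on Red Hat Enterprise Linux; the values false and true below are illustrative placeholders for YOUR_VALUE:
# JWS_HOME/tomcat/bin/setenv.sh -- created if it does not already exist
# prefer IPv6 sockets and IPv6 address representations for this Tomcat instance
export "CATALINA_OPTS=-Djava.net.preferIPv4Stack=false -Djava.net.preferIPv6Addresses=true"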
[ "export \"CATALINA_OPTS=-Djava.net.preferIPv4Stack= YOUR_VALUE -Djava.net.preferIPv6Addresses= YOUR_VALUE \"", "set \"CATALINA_OPTS=-Djava.net.preferIPv4Stack= YOUR_VALUE -Djava.net.preferIPv6Addresses= YOUR_VALUE \"", "<Server ... address=\" TOMCAT_BINDING_ADDRESS \">", "<Connector protocol=\"HTTP/1.1\" ... address=\" HTTP_CONNECTOR_ADDRESS \">", "<Connector protocol=\"AJP/1.3\" ... address=\" AJP_CONNECTOR_ADDRESS \">" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/installation_guide/assembly_java-ipv4-ipv6-properties_jboss_web_server_installation_guide
Chapter 68. Kubernetes
Chapter 68. Kubernetes Since Camel 2.17 The Kubernetes components integrate your application with Kubernetes standalone or on top of Openshift. 68.1. Kubernetes components See the following for usage of each component: Kubernetes ConfigMap Perform operations on Kubernetes ConfigMaps and get notified on ConfigMaps changes. Kubernetes Custom Resources Perform operations on Kubernetes Custom Resources and get notified on Deployment changes. Kubernetes Deployments Perform operations on Kubernetes Deployments and get notified on Deployment changes. Kubernetes Event Perform operations on Kubernetes Events and get notified on Events changes. Kubernetes HPA Perform operations on Kubernetes Horizontal Pod Autoscalers (HPA) and get notified on HPA changes. Kubernetes Job Perform operations on Kubernetes Jobs. Kubernetes Namespaces Perform operations on Kubernetes Namespaces and get notified on Namespace changes. Kubernetes Nodes Perform operations on Kubernetes Nodes and get notified on Node changes. Kubernetes Persistent Volume Perform operations on Kubernetes Persistent Volumes and get notified on Persistent Volume changes. Kubernetes Persistent Volume Claim Perform operations on Kubernetes Persistent Volumes Claims and get notified on Persistent Volumes Claim changes. Kubernetes Pods Perform operations on Kubernetes Pods and get notified on Pod changes. Kubernetes Replication Controller Perform operations on Kubernetes Replication Controllers and get notified on Replication Controllers changes. Kubernetes Resources Quota Perform operations on Kubernetes Resources Quotas. Kubernetes Secrets Perform operations on Kubernetes Secrets. Kubernetes Service Account Perform operations on Kubernetes Service Accounts. Kubernetes Services Perform operations on Kubernetes Services and get notified on Service changes. Openshift Build Config Perform operations on OpenShift Build Configs. Openshift Builds Perform operations on OpenShift Builds. Openshift Deployment Configs Perform operations on Openshift Deployment Configs and get notified on Deployment Config changes. 68.2. Dependencies Add the following dependency to your pom.xml for this component: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency> 68.3. Usage 68.3.1. Producer examples Here we show some examples of producer using camel-kubernetes. Create a pod from("direct:createPod") .toF("kubernetes-pods://%s?oauthToken=%s&operation=createPod", host, authToken); By using the KubernetesConstants.KUBERNETES_POD_SPEC header you can specify your PodSpec and pass it to this operation. Delete a pod from("direct:createPod") .toF("kubernetes-pods://%s?oauthToken=%s&operation=deletePod", host, authToken); By using the KubernetesConstants.KUBERNETES_POD_NAME header you can specify your Pod name and pass it to this operation. 68.4. Using Kubernetes ConfigMaps and Secrets The camel-kubernetes component also provides functions that loads the property values from Kubernetes`ConfigMaps` or Secrets . For more information see Property Placeholder . 68.5. Spring Boot Auto-Configuration The component supports 102 options, which are listed below. Name Description Default Type camel.cluster.kubernetes.attributes Custom service attributes. Map camel.cluster.kubernetes.cluster-labels Set the labels used to identify the pods composing the cluster. Map camel.cluster.kubernetes.config-map-name Set the name of the ConfigMap used to do optimistic locking (defaults to 'leaders'). 
String camel.cluster.kubernetes.connection-timeout-millis Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer camel.cluster.kubernetes.enabled Sets if the Kubernetes cluster service should be enabled or not, default is false. false Boolean camel.cluster.kubernetes.id Cluster Service ID. String camel.cluster.kubernetes.jitter-factor A jitter factor to apply in order to prevent all pods to call Kubernetes APIs in the same instant. Double camel.cluster.kubernetes.kubernetes-namespace Set the name of the Kubernetes namespace containing the pods and the configmap (autodetected by default). String camel.cluster.kubernetes.lease-duration-millis The default duration of the lease for the current leader. Long camel.cluster.kubernetes.master-url Set the URL of the Kubernetes master (read from Kubernetes client properties by default). String camel.cluster.kubernetes.order Service lookup order/priority. Integer camel.cluster.kubernetes.pod-name Set the name of the current pod (autodetected from container host name by default). String camel.cluster.kubernetes.renew-deadline-millis The deadline after which the leader must stop its services because it may have lost the leadership. Long camel.cluster.kubernetes.retry-period-millis The time between two subsequent attempts to check and acquire the leadership. It is randomized using the jitter factor. Long camel.component.kubernetes-config-maps.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-config-maps.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-config-maps.enabled Whether to enable auto configuration of the kubernetes-config-maps component. This is enabled by default. Boolean camel.component.kubernetes-config-maps.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-config-maps.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-custom-resources.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-custom-resources.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-custom-resources.enabled Whether to enable auto configuration of the kubernetes-custom-resources component. This is enabled by default. Boolean camel.component.kubernetes-custom-resources.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-custom-resources.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-deployments.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-deployments.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-deployments.enabled Whether to enable auto configuration of the kubernetes-deployments component. This is enabled by default. Boolean camel.component.kubernetes-deployments.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-deployments.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-events.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-events.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-events.enabled Whether to enable auto configuration of the kubernetes-events component. This is enabled by default. Boolean camel.component.kubernetes-events.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-events.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-hpa.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-hpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-hpa.enabled Whether to enable auto configuration of the kubernetes-hpa component. This is enabled by default. Boolean camel.component.kubernetes-hpa.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-hpa.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-job.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-job.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-job.enabled Whether to enable auto configuration of the kubernetes-job component. This is enabled by default. Boolean camel.component.kubernetes-job.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-job.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-namespaces.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-namespaces.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-namespaces.enabled Whether to enable auto configuration of the kubernetes-namespaces component. This is enabled by default. Boolean camel.component.kubernetes-namespaces.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-namespaces.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-nodes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-nodes.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-nodes.enabled Whether to enable auto configuration of the kubernetes-nodes component. This is enabled by default. Boolean camel.component.kubernetes-nodes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-nodes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes-claims.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes-claims.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes-claims component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes-claims.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes-claims.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-pods.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-pods.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-pods.enabled Whether to enable auto configuration of the kubernetes-pods component. This is enabled by default. Boolean camel.component.kubernetes-pods.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-pods.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-replication-controllers.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-replication-controllers.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-replication-controllers.enabled Whether to enable auto configuration of the kubernetes-replication-controllers component. This is enabled by default. Boolean camel.component.kubernetes-replication-controllers.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-replication-controllers.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-resources-quota.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-resources-quota.enabled Whether to enable auto configuration of the kubernetes-resources-quota component. This is enabled by default. Boolean camel.component.kubernetes-resources-quota.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-resources-quota.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false Boolean camel.component.kubernetes-secrets.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-secrets.enabled Whether to enable auto configuration of the kubernetes-secrets component. This is enabled by default. Boolean camel.component.kubernetes-secrets.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-secrets.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-service-accounts.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-service-accounts.enabled Whether to enable auto configuration of the kubernetes-service-accounts component. This is enabled by default. Boolean camel.component.kubernetes-service-accounts.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-service-accounts.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-services.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-services.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-services.enabled Whether to enable auto configuration of the kubernetes-services component. This is enabled by default. Boolean camel.component.kubernetes-services.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-services.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-build-configs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-build-configs.enabled Whether to enable auto configuration of the openshift-build-configs component. This is enabled by default. Boolean camel.component.openshift-build-configs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-build-configs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-builds.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-builds.enabled Whether to enable auto configuration of the openshift-builds component. This is enabled by default. Boolean camel.component.openshift-builds.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-builds.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-deploymentconfigs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-deploymentconfigs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.openshift-deploymentconfigs.enabled Whether to enable auto configuration of the openshift-deploymentconfigs component. This is enabled by default. Boolean camel.component.openshift-deploymentconfigs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-deploymentconfigs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency>", "from(\"direct:createPod\") .toF(\"kubernetes-pods://%s?oauthToken=%s&operation=createPod\", host, authToken);", "from(\"direct:createPod\") .toF(\"kubernetes-pods://%s?oauthToken=%s&operation=deletePod\", host, authToken);" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kubernetes-component-starter
1.7. Virtualization
1.7. Virtualization System monitoring via SNMP, BZ# 642556 This feature provides KVM support for stable technology that is already used in data centers with bare-metal systems. SNMP is the standard for monitoring and is extremely well understood as well as computationally efficient. System monitoring via SNMP in Red Hat Enterprise Linux 6.2 allows KVM hosts to send SNMP traps on events so that hypervisor events can be communicated to the user via the standard SNMP protocol. This feature is provided through the addition of a new package: libvirt-snmp . This feature is introduced as a Technology Preview. Wire speed requirement in KVM network drivers Virtualization and cloud products that run networking workloads need to run at wire speed. Up until Red Hat Enterprise Linux 6.1, the only way to reach wire speed on a 10 Gb Ethernet NIC with lower CPU utilization was to use PCI device assignment (passthrough), which limits other features like memory overcommit and guest migration. The macvtap / vhost zero-copy capability allows the user to use those features when high performance is required. This feature improves performance for any Red Hat Enterprise Linux 6.x guest in the VEPA use case. This feature is introduced as a Technology Preview.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/virtualization_tp
Chapter 7. Virtual machines
Chapter 7. Virtual machines 7.1. Creating VMs from Red Hat images 7.1.1. Creating virtual machines from Red Hat images overview Red Hat images are golden images . They are published as container disks in a secure registry. The Containerized Data Importer (CDI) polls and imports the container disks into your cluster and stores them in the openshift-virtualization-os-images project as snapshots or persistent volume claims (PVCs). Red Hat images are automatically updated. You can disable and re-enable automatic updates for these images. See Managing Red Hat boot source updates . Cluster administrators can enable automatic subscription for Red Hat Enterprise Linux (RHEL) virtual machines in the OpenShift Virtualization web console . You can create virtual machines (VMs) from operating system images provided by Red Hat by using one of the following methods: Creating a VM from a template by using the web console Creating a VM from an instance type by using the web console Creating a VM from a VirtualMachine manifest by using the command line Important Do not create VMs in the default openshift-* namespaces. Instead, create a new namespace or use an existing namespace without the openshift prefix. 7.1.1.1. About golden images A golden image is a preconfigured snapshot of a virtual machine (VM) that you can use as a resource to deploy new VMs. For example, you can use golden images to provision the same system environment consistently and deploy systems more quickly and efficiently. 7.1.1.1.1. How do golden images work? Golden images are created by installing and configuring an operating system and software applications on a reference machine or virtual machine. This includes setting up the system, installing required drivers, applying patches and updates, and configuring specific options and preferences. After the golden image is created, it is saved as a template or image file that can be replicated and deployed across multiple clusters. The golden image can be updated by its maintainer periodically to incorporate necessary software updates and patches, ensuring that the image remains up to date and secure, and newly created VMs are based on this updated image. 7.1.1.1.2. Red Hat implementation of golden images Red Hat publishes golden images as container disks in the registry for versions of Red Hat Enterprise Linux (RHEL). Container disks are virtual machine images that are stored as a container image in a container image registry. Any published image will automatically be made available in connected clusters after the installation of OpenShift Virtualization. After the images are available in a cluster, they are ready to use to create VMs. 7.1.1.2. About VM boot sources Virtual machines (VMs) consist of a VM definition and one or more disks that are backed by data volumes. VM templates enable you to create VMs using predefined specifications. Every template requires a boot source, which is a fully configured disk image including configured drivers. Each template contains a VM definition with a pointer to the boot source. Each boot source has a predefined name and namespace. For some operating systems, a boot source is automatically provided. If it is not provided, then an administrator must prepare a custom boot source. Provided boot sources are updated automatically to the latest version of the operating system. For auto-updated boot sources, persistent volume claims (PVCs) and volume snapshots are created with the cluster's default storage class. 
If you select a different default storage class after configuration, you must delete the existing boot sources in the cluster namespace that are configured with the default storage class. 7.1.2. Creating virtual machines from templates You can create virtual machines (VMs) from Red Hat templates by using the OpenShift Container Platform web console. 7.1.2.1. About VM templates Boot sources You can expedite VM creation by using templates that have an available boot source. Templates with a boot source are labeled Available boot source if they do not have a custom label. Templates without a boot source are labeled Boot source required . See Creating virtual machines from custom images . Customization You can customize the disk source and VM parameters before you start the VM: See storage volume types and storage fields for details about disk source settings. See the Overview , YAML , and Configuration tab documentation for details about VM settings. Note If you copy a VM template with all its labels and annotations, your version of the template is marked as deprecated when a new version of the Scheduling, Scale, and Performance (SSP) Operator is deployed. You can remove this designation. See Customizing a VM template by using the web console . Single-node OpenShift Due to differences in storage behavior, some templates are incompatible with single-node OpenShift. To ensure compatibility, do not set the evictionStrategy field for templates or VMs that use data volumes or storage profiles. 7.1.2.2. Creating a VM from a template You can create a virtual machine (VM) from a template with an available boot source by using the OpenShift Container Platform web console. Optional: You can customize template or VM parameters, such as data sources, cloud-init, or SSH keys, before you start the VM. Procedure Navigate to Virtualization Catalog in the web console. Click Boot source available to filter templates with boot sources. The catalog displays the default templates. Click All Items to view all available templates for your filters. Click a template tile to view its details. Click Quick create VirtualMachine to create a VM from the template. Optional: Customize the template or VM parameters: Click Customize VirtualMachine . Expand Storage or Optional parameters to edit data source settings. Click Customize VirtualMachine parameters . The Customize and create VirtualMachine pane displays the Overview , YAML , Scheduling , Environment , Network interfaces , Disks , Scripts , and Metadata tabs. Edit the parameters that must be set before the VM boots, such as cloud-init or a static SSH key. Click Create VirtualMachine . The VirtualMachine details page displays the provisioning status. 7.1.2.2.1. Storage volume types Table 7.1. Storage volume types Type Description ephemeral A local copy-on-write (COW) image that uses a network volume as a read-only backing store. The backing volume must be a PersistentVolumeClaim . The ephemeral image is created when the virtual machine starts and stores all writes locally. The ephemeral image is discarded when the virtual machine is stopped, restarted, or deleted. The backing volume (PVC) is not mutated in any way. persistentVolumeClaim Attaches an available PV to a virtual machine. Attaching a PV allows for the virtual machine data to persist between sessions. 
Importing an existing virtual machine disk into a PVC by using CDI and attaching the PVC to a virtual machine instance is the recommended method for importing existing virtual machines into OpenShift Container Platform. There are some requirements for the disk to be used within a PVC. dataVolume Data volumes build on the persistentVolumeClaim disk type by managing the process of preparing the virtual machine disk via an import, clone, or upload operation. VMs that use this volume type are guaranteed not to start until the volume is ready. Specify type: dataVolume or type: "" . If you specify any other value for type , such as persistentVolumeClaim , a warning is displayed, and the virtual machine does not start. cloudInitNoCloud Attaches a disk that contains the referenced cloud-init NoCloud data source, providing user data and metadata to the virtual machine. A cloud-init installation is required inside the virtual machine disk. containerDisk References an image, such as a virtual machine disk, that is stored in the container image registry. The image is pulled from the registry and attached to the virtual machine as a disk when the virtual machine is launched. A containerDisk volume is not limited to a single virtual machine and is useful for creating large numbers of virtual machine clones that do not require persistent storage. Only RAW and QCOW2 formats are supported disk types for the container image registry. QCOW2 is recommended for reduced image size. Note A containerDisk volume is ephemeral. It is discarded when the virtual machine is stopped, restarted, or deleted. A containerDisk volume is useful for read-only file systems such as CD-ROMs or for disposable virtual machines. emptyDisk Creates an additional sparse QCOW2 disk that is tied to the life-cycle of the virtual machine interface. The data survives guest-initiated reboots in the virtual machine but is discarded when the virtual machine stops or is restarted from the web console. The empty disk is used to store application dependencies and data that otherwise exceeds the limited temporary file system of an ephemeral disk. The disk capacity size must also be provided. 7.1.2.2.2. Storage fields Field Description Blank (creates PVC) Create an empty disk. Import via URL (creates PVC) Import content via URL (HTTP or HTTPS endpoint). Use an existing PVC Use a PVC that is already available in the cluster. Clone existing PVC (creates PVC) Select an existing PVC available in the cluster and clone it. Import via Registry (creates PVC) Import content via container registry. Container (ephemeral) Upload content from a container located in a registry accessible from the cluster. The container disk should be used only for read-only filesystems such as CD-ROMs or temporary virtual machines. Name Name of the disk. The name can contain lowercase letters ( a-z ), numbers ( 0-9 ), hyphens ( - ), and periods ( . ), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, or special characters. Size Size of the disk in GiB. Type Type of disk. Example: Disk or CD-ROM Interface Type of disk device. Supported interfaces are virtIO , SATA , and SCSI . Storage Class The storage class that is used to create the disk. Advanced storage settings The following advanced storage settings are optional and available for Blank , Import via URL , and Clone existing PVC disks. If you do not specify these parameters, the system uses the default storage profile values. 
Parameter Option Parameter description Volume Mode Filesystem Stores the virtual disk on a file system-based volume. Block Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it. Access Mode ReadWriteOnce (RWO) Volume can be mounted as read-write by a single node. ReadWriteMany (RWX) Volume can be mounted as read-write by many nodes at one time. Note This mode is required for live migration. 7.1.2.2.3. Customizing a VM template by using the web console You can customize an existing virtual machine (VM) template by modifying the VM or template parameters, such as data sources, cloud-init, or SSH keys, before you start the VM. If you customize a template by copying it and including all of its labels and annotations, the customized template is marked as deprecated when a new version of the Scheduling, Scale, and Performance (SSP) Operator is deployed. You can remove the deprecated designation from the customized template. Procedure Navigate to Virtualization Templates in the web console. From the list of VM templates, click the template marked as deprecated. Click Edit to the pencil icon beside Labels . Remove the following two labels: template.kubevirt.io/type: "base" template.kubevirt.io/version: "version" Click Save . Click the pencil icon beside the number of existing Annotations . Remove the following annotation: template.kubevirt.io/deprecated Click Save . 7.1.2.2.4. Creating a custom VM template in the web console You create a virtual machine template by editing a YAML file example in the OpenShift Container Platform web console. Procedure In the web console, click Virtualization Templates in the side menu. Optional: Use the Project drop-down menu to change the project associated with the new template. All templates are saved to the openshift project by default. Click Create Template . Specify the template parameters by editing the YAML file. Click Create . The template is displayed on the Templates page. Optional: Click Download to download and save the YAML file. 7.1.3. Creating virtual machines from instance types You can create virtual machines (VMs) from instance types by using the OpenShift Container Platform web console. Important Creating a VM from an instance type is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 7.1.3.1. Creating a VM from an instance type You can create a virtual machine (VM) from an instance type by using the OpenShift Container Platform web console. Procedure In the web console, navigate to Virtualization Catalog and click the InstanceTypes tab. Select a bootable volume. Note The volume table only lists volumes in the openshift-virtualization-os-images namespace that have the instancetype.kubevirt.io/default-preference label. Click an instance type tile and select the configuration appropriate for your workload. If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key in the VirtualMachine details section. 
Select one of the following options: Use existing : Select a secret from the secrets list. Add new : Browse to the public SSH key file or paste the file in the key field. Enter the secret name. Optional: Select Automatically apply this key to any new VirtualMachine you create in this project . Click Save . Optional: Click View YAML & CLI to view the YAML file. Click CLI to view the CLI commands. You can also download or copy either the YAML file contents or the CLI commands. Click Create VirtualMachine . After the VM is created, you can monitor the status on the VirtualMachine details page. 7.1.4. Creating virtual machines from the command line You can create virtual machines (VMs) from the command line by editing or creating a VirtualMachine manifest. 7.1.4.1. Creating a VM from a VirtualMachine manifest You can create a virtual machine (VM) from a VirtualMachine manifest. Procedure Edit the VirtualMachine manifest for your VM. The following example configures a Red Hat Enterprise Linux (RHEL) VM: Example 7.1. Example manifest for a RHEL VM 1 Specify the name of the virtual machine. 2 Specify the name in the spec.dataImportCronTemplate.spec.managedDataSource field in the Hyperconvered CR. 3 Specify the password for cloud-user. Create a virtual machine by using the manifest file: USD oc create -f <vm_manifest_file>.yaml Optional: Start the virtual machine: USD virtctl start <vm_name> -n <namespace> 7.2. Creating VMs from custom images 7.2.1. Creating virtual machines from custom images overview You can create virtual machines (VMs) from custom operating system images by using one of the following methods: Importing the image as a container disk from a registry . Optional: You can enable auto updates for your container disks. See Managing automatic boot source updates for details. Importing the image from a web page . Uploading the image from a local machine . Cloning a persistent volume claim (PVC) that contains the image . The Containerized Data Importer (CDI) imports the image into a PVC by using a data volume. You add the PVC to the VM by using the OpenShift Container Platform web console or command line. Important You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat. You must also install VirtIO drivers on Windows VMs. The QEMU guest agent is included with Red Hat images. 7.2.2. Creating VMs by using container disks You can create virtual machines (VMs) by using container disks built from operating system images. You can enable auto updates for your container disks. See Managing automatic boot source updates for details. Important If the container disks are large, the I/O traffic might increase and cause worker nodes to be unavailable. You can perform the following tasks to resolve this issue: Pruning DeploymentConfig objects . Configuring garbage collection . You create a VM from a container disk by performing the following steps: Build an operating system image into a container disk and upload it to your container registry . If your container registry does not have TLS, configure your environment to disable TLS for your registry . Create a VM with the container disk as the disk source by using the web console or the command line . Important You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat. 7.2.2.1. Building and uploading a container disk You can build a virtual machine (VM) image into a container disk and upload it to a registry. 
The size of a container disk is limited by the maximum layer size of the registry where the container disk is hosted. Note For Red Hat Quay , you can change the maximum layer size by editing the YAML configuration file that is created when Red Hat Quay is first deployed. Prerequisites You must have podman installed. You must have a QCOW2 or RAW image file. Procedure Create a Dockerfile to build the VM image into a container image. The VM image must be owned by QEMU, which has a UID of 107 , and placed in the /disk/ directory inside the container. Permissions for the /disk/ directory must then be set to 0440 . The following example uses the Red Hat Universal Base Image (UBI) to handle these configuration changes in the first stage, and uses the minimal scratch image in the second stage to store the result: USD cat > Dockerfile << EOF FROM registry.access.redhat.com/ubi8/ubi:latest AS builder ADD --chown=107:107 <vm_image>.qcow2 /disk/ 1 RUN chmod 0440 /disk/* FROM scratch COPY --from=builder /disk/* /disk/ EOF 1 Where <vm_image> is the image in either QCOW2 or RAW format. If you use a remote image, replace <vm_image>.qcow2 with the complete URL. Build and tag the container: USD podman build -t <registry>/<container_disk_name>:latest . Push the container image to the registry: USD podman push <registry>/<container_disk_name>:latest 7.2.2.2. Disabling TLS for a container registry You can disable TLS (transport layer security) for one or more container registries by editing the insecureRegistries field of the HyperConverged custom resource. Prerequisites Open the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Add a list of insecure registries to the spec.storageImport.insecureRegistries field. Example HyperConverged custom resource apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: storageImport: insecureRegistries: 1 - "private-registry-example-1:5000" - "private-registry-example-2:5000" 1 Replace the examples in this list with valid registry hostnames. 7.2.2.3. Creating a VM from a container disk by using the web console You can create a virtual machine (VM) by importing a container disk from a container registry by using the OpenShift Container Platform web console. Procedure Navigate to Virtualization Catalog in the web console. Click a template tile without an available boot source. Click Customize VirtualMachine . On the Customize template parameters page, expand Storage and select Registry (creates PVC) from the Disk source list. Enter the container image URL. Example: https://mirror.arizona.edu/fedora/linux/releases/38/Cloud/x86_64/images/Fedora-Cloud-Base-38-1.6.x86_64.qcow2 Set the disk size. Click . Click Create VirtualMachine . 7.2.2.4. Creating a VM from a container disk by using the command line You can create a virtual machine (VM) from a container disk by using the command line. When the virtual machine (VM) is created, the data volume with the container disk is imported into persistent storage. Prerequisites You must have access credentials for the container registry that contains the container disk. 
Procedure If the container registry requires authentication, create a Secret manifest, specifying the credentials, and save it as a data-source-secret.yaml file: apiVersion: v1 kind: Secret metadata: name: data-source-secret labels: app: containerized-data-importer type: Opaque data: accessKeyId: "" 1 secretKey: "" 2 1 Specify the Base64-encoded key ID or user name. 2 Specify the Base64-encoded secret key or password. Apply the Secret manifest by running the following command: USD oc apply -f data-source-secret.yaml If the VM must communicate with servers that use self-signed certificates or certificates that are not signed by the system CA bundle, create a config map in the same namespace as the VM: USD oc create configmap tls-certs 1 --from-file=</path/to/file/ca.pem> 2 1 Specify the config map name. 2 Specify the path to the CA certificate. Edit the VirtualMachine manifest and save it as a vm-fedora-datavolume.yaml file: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume name: vm-fedora-datavolume 1 spec: dataVolumeTemplates: - metadata: creationTimestamp: null name: fedora-dv 2 spec: storage: resources: requests: storage: 10Gi 3 storageClassName: <storage_class> 4 source: registry: url: "docker://kubevirt/fedora-cloud-container-disk-demo:latest" 5 secretRef: data-source-secret 6 certConfigMap: tls-certs 7 status: {} running: true template: metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume spec: domain: devices: disks: - disk: bus: virtio name: datavolumedisk1 machine: type: "" resources: requests: memory: 1.5Gi terminationGracePeriodSeconds: 180 volumes: - dataVolume: name: fedora-dv name: datavolumedisk1 status: {} 1 Specify the name of the VM. 2 Specify the name of the data volume. 3 Specify the size of the storage requested for the data volume. 4 Optional: If you do not specify a storage class, the default storage class is used. 5 Specify the URL of the container registry. 6 Optional: Specify the secret name if you created a secret for the container registry access credentials. 7 Optional: Specify a CA certificate config map. Create the VM by running the following command: USD oc create -f vm-fedora-datavolume.yaml The oc create command creates the data volume and the VM. The CDI controller creates an underlying PVC with the correct annotation and the import process begins. When the import is complete, the data volume status changes to Succeeded . You can start the VM. Data volume provisioning happens in the background, so there is no need to monitor the process. Verification The importer pod downloads the container disk from the specified URL and stores it on the provisioned persistent volume. View the status of the importer pod by running the following command: USD oc get pods Monitor the data volume until its status is Succeeded by running the following command: USD oc describe dv fedora-dv 1 1 Specify the data volume name that you defined in the VirtualMachine manifest. Verify that provisioning is complete and that the VM has started by accessing its serial console: USD virtctl console vm-fedora-datavolume 7.2.3. Creating VMs by importing images from web pages You can create virtual machines (VMs) by importing operating system images from web pages. Important You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat. 7.2.3.1. 
Creating a VM from an image on a web page by using the web console You can create a virtual machine (VM) by importing an image from a web page by using the OpenShift Container Platform web console. Prerequisites You must have access to the web page that contains the image. Procedure Navigate to Virtualization Catalog in the web console. Click a template tile without an available boot source. Click Customize VirtualMachine . On the Customize template parameters page, expand Storage and select URL (creates PVC) from the Disk source list. Enter the image URL. Example: https://access.redhat.com/downloads/content/69/ver=/rhel---7/7.9/x86_64/product-software Enter the container image URL. Example: https://mirror.arizona.edu/fedora/linux/releases/38/Cloud/x86_64/images/Fedora-Cloud-Base-38-1.6.x86_64.qcow2 Set the disk size. Click . Click Create VirtualMachine . 7.2.3.2. Creating a VM from an image on a web page by using the command line You can create a virtual machine (VM) from an image on a web page by using the command line. When the virtual machine (VM) is created, the data volume with the image is imported into persistent storage. Prerequisites You must have access credentials for the web page that contains the image. Procedure If the web page requires authentication, create a Secret manifest, specifying the credentials, and save it as a data-source-secret.yaml file: apiVersion: v1 kind: Secret metadata: name: data-source-secret labels: app: containerized-data-importer type: Opaque data: accessKeyId: "" 1 secretKey: "" 2 1 Specify the Base64-encoded key ID or user name. 2 Specify the Base64-encoded secret key or password. Apply the Secret manifest by running the following command: USD oc apply -f data-source-secret.yaml If the VM must communicate with servers that use self-signed certificates or certificates that are not signed by the system CA bundle, create a config map in the same namespace as the VM: USD oc create configmap tls-certs 1 --from-file=</path/to/file/ca.pem> 2 1 Specify the config map name. 2 Specify the path to the CA certificate. Edit the VirtualMachine manifest and save it as a vm-fedora-datavolume.yaml file: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume name: vm-fedora-datavolume 1 spec: dataVolumeTemplates: - metadata: creationTimestamp: null name: fedora-dv 2 spec: storage: resources: requests: storage: 10Gi 3 storageClassName: <storage_class> 4 source: http: url: "https://mirror.arizona.edu/fedora/linux/releases/35/Cloud/x86_64/images/Fedora-Cloud-Base-35-1.2.x86_64.qcow2" 5 registry: url: "docker://kubevirt/fedora-cloud-container-disk-demo:latest" 6 secretRef: data-source-secret 7 certConfigMap: tls-certs 8 status: {} running: true template: metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume spec: domain: devices: disks: - disk: bus: virtio name: datavolumedisk1 machine: type: "" resources: requests: memory: 1.5Gi terminationGracePeriodSeconds: 180 volumes: - dataVolume: name: fedora-dv name: datavolumedisk1 status: {} 1 Specify the name of the VM. 2 Specify the name of the data volume. 3 Specify the size of the storage requested for the data volume. 4 Optional: If you do not specify a storage class, the default storage class is used. 5 6 Specify the URL of the web page. 7 Optional: Specify the secret name if you created a secret for the web page access credentials. 8 Optional: Specify a CA certificate config map. 
Create the VM by running the following command: USD oc create -f vm-fedora-datavolume.yaml The oc create command creates the data volume and the VM. The CDI controller creates an underlying PVC with the correct annotation and the import process begins. When the import is complete, the data volume status changes to Succeeded . You can start the VM. Data volume provisioning happens in the background, so there is no need to monitor the process. Verification The importer pod downloads the image from the specified URL and stores it on the provisioned persistent volume. View the status of the importer pod by running the following command: USD oc get pods Monitor the data volume until its status is Succeeded by running the following command: USD oc describe dv fedora-dv 1 1 Specify the data volume name that you defined in the VirtualMachine manifest. Verify that provisioning is complete and that the VM has started by accessing its serial console: USD virtctl console vm-fedora-datavolume 7.2.4. Creating VMs by uploading images You can create virtual machines (VMs) by uploading operating system images from your local machine. You can create a Windows VM by uploading a Windows image to a PVC. Then you clone the PVC when you create the VM. Important You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat. You must also install VirtIO drivers on Windows VMs. 7.2.4.1. Creating a VM from an uploaded image by using the web console You can create a virtual machine (VM) from an uploaded operating system image by using the OpenShift Container Platform web console. Prerequisites You must have an IMG , ISO , or QCOW2 image file. Procedure Navigate to Virtualization Catalog in the web console. Click a template tile without an available boot source. Click Customize VirtualMachine . On the Customize template parameters page, expand Storage and select Upload (Upload a new file to a PVC) from the Disk source list. Browse to the image on your local machine and set the disk size. Click Customize VirtualMachine . Click Create VirtualMachine . 7.2.4.2. Creating a Windows VM You can create a Windows virtual machine (VM) by uploading a Windows image to a persistent volume claim (PVC) and then cloning the PVC when you create a VM by using the OpenShift Container Platform web console. Prerequisites You created a Windows installation DVD or USB with the Windows Media Creation Tool. See Create Windows 10 installation media in the Microsoft documentation. You created an autounattend.xml answer file. See Answer files (unattend.xml) in the Microsoft documentation. Procedure Upload the Windows image as a new PVC: Navigate to Storage PersistentVolumeClaims in the web console. Click Create PersistentVolumeClaim With Data upload form . Browse to the Windows image and select it. Enter the PVC name, select the storage class and size and then click Upload . The Windows image is uploaded to a PVC. Configure a new VM by cloning the uploaded PVC: Navigate to Virtualization Catalog . Select a Windows template tile and click Customize VirtualMachine . Select Clone (clone PVC) from the Disk source list. Select the PVC project, the Windows image PVC, and the disk size. Apply the answer file to the VM: Click Customize VirtualMachine parameters . On the Sysprep section of the Scripts tab, click Edit . Browse to the autounattend.xml answer file and click Save . Set the run strategy of the VM: Clear Start this VirtualMachine after creation so that the VM does not start immediately. 
Click Create VirtualMachine . On the YAML tab, replace running:false with runStrategy: RerunOnFailure and click Save . Click the options menu and select Start . The VM boots from the sysprep disk containing the autounattend.xml answer file. 7.2.4.2.1. Generalizing a Windows VM image You can generalize a Windows operating system image to remove all system-specific configuration data before you use the image to create a new virtual machine (VM). Before generalizing the VM, you must ensure the sysprep tool cannot detect an answer file after the unattended Windows installation. Prerequisites A running Windows VM with the QEMU guest agent installed. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines . Select a Windows VM to open the VirtualMachine details page. Click Configuration Disks . Click the Options menu beside the sysprep disk and select Detach . Click Detach . Rename C:\Windows\Panther\unattend.xml to avoid detection by the sysprep tool. Start the sysprep program by running the following command: %WINDIR%\System32\Sysprep\sysprep.exe /generalize /shutdown /oobe /mode:vm After the sysprep tool completes, the Windows VM shuts down. The disk image of the VM is now available to use as an installation image for Windows VMs. You can now specialize the VM. 7.2.4.2.2. Specializing a Windows VM image Specializing a Windows virtual machine (VM) configures the computer-specific information from a generalized Windows image onto the VM. Prerequisites You must have a generalized Windows disk image. You must create an unattend.xml answer file. See the Microsoft documentation for details. Procedure In the OpenShift Container Platform console, click Virtualization Catalog . Select a Windows template and click Customize VirtualMachine . Select PVC (clone PVC) from the Disk source list. Select the PVC project and PVC name of the generalized Windows image. Click Customize VirtualMachine parameters . Click the Scripts tab. In the Sysprep section, click Edit , browse to the unattend.xml answer file, and click Save . Click Create VirtualMachine . During the initial boot, Windows uses the unattend.xml answer file to specialize the VM. The VM is now ready to use. Additional resources for creating Windows VMs Microsoft, Sysprep (Generalize) a Windows installation Microsoft, generalize Microsoft, specialize 7.2.4.3. Creating a VM from an uploaded image by using the command line You can upload an operating system image by using the virtctl command line tool. You can use an existing data volume or create a new data volume for the image. Prerequisites You must have an ISO , IMG , or QCOW2 operating system image file. For best performance, compress the image file by using the virt-sparsify tool or the xz or gzip utilities. You must have virtctl installed. The client machine must be configured to trust the OpenShift Container Platform router's certificate. Procedure Upload the image by running the virtctl image-upload command: USD virtctl image-upload dv <datavolume_name> \ 1 --size=<datavolume_size> \ 2 --image-path=</path/to/image> \ 3 1 The name of the data volume. 2 The size of the data volume. For example: --size=500Mi , --size=1G 3 The file path of the image. Note If you do not want to create a new data volume, omit the --size parameter and include the --no-create flag. When uploading a disk image to a PVC, the PVC size must be larger than the size of the uncompressed virtual disk. To allow insecure server connections when using HTTPS, use the --insecure parameter. 
When you use the --insecure flag, the authenticity of the upload endpoint is not verified. Optional: To verify that a data volume was created, view all data volumes by running the following command: USD oc get dvs 7.2.5. Creating VMs by cloning PVCs You can create virtual machines (VMs) by cloning existing persistent volume claims (PVCs) with custom images. You clone a PVC by creating a data volume that references a source PVC. You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat. 7.2.5.1. Creating a VM from a PVC by using the web console You can create a virtual machine (VM) by cloning a persistent volume claim (PVC) by using the OpenShift Container Platform web console. Prerequisites You must have access to the namespace that contains the source PVC. Procedure Navigate to Virtualization Catalog in the web console. Click a template tile without an available boot source. Click Customize VirtualMachine . On the Customize template parameters page, expand Storage and select PVC (clone PVC) from the Disk source list. Select the PVC project and the PVC name. Set the disk size. Click . Click Create VirtualMachine . 7.2.5.2. Creating a VM from a PVC by using the command line You can create a virtual machine (VM) by cloning the persistent volume claim (PVC) of an existing VM by using the command line. You can clone a PVC by using one of the following options: Cloning a PVC to a new data volume. This method creates a data volume whose lifecycle is independent of the original VM. Deleting the original VM does not affect the new data volume or its associated PVC. Cloning a PVC by creating a VirtualMachine manifest with a dataVolumeTemplates stanza. This method creates a data volume whose lifecycle is dependent on the original VM. Deleting the original VM deletes the cloned data volume and its associated PVC. 7.2.5.2.1. Cloning a PVC to a data volume You can clone the persistent volume claim (PVC) of an existing virtual machine (VM) disk to a data volume by using the command line. You create a data volume that references the original source PVC. The lifecycle of the new data volume is independent of the original VM. Deleting the original VM does not affect the new data volume or its associated PVC. Cloning between different volume modes is supported for host-assisted cloning, such as cloning from a block persistent volume (PV) to a file system PV, as long as the source and target PVs belong to the kubevirt content type. Note Smart-cloning is faster and more efficient than host-assisted cloning because it uses snapshots to clone PVCs. Smart-cloning is supported by storage providers that support snapshots, such as Red Hat OpenShift Data Foundation. Cloning between different volume modes is not supported for smart-cloning. Prerequisites The VM with the source PVC must be powered down. If you clone a PVC to a different namespace, you must have permissions to create resources in the target namespace. Additional prerequisites for smart-cloning: Your storage provider must support snapshots.
The source and target PVCs must have the same storage provider and volume mode. The value of the driver key of the VolumeSnapshotClass object must match the value of the provisioner key of the StorageClass object as shown in the following example: Example VolumeSnapshotClass object kind: VolumeSnapshotClass apiVersion: snapshot.storage.k8s.io/v1 driver: openshift-storage.rbd.csi.ceph.com # ... Example StorageClass object kind: StorageClass apiVersion: storage.k8s.io/v1 # ... provisioner: openshift-storage.rbd.csi.ceph.com Procedure Create a DataVolume manifest as shown in the following example: apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <datavolume> 1 spec: source: pvc: namespace: "<source_namespace>" 2 name: "<my_vm_disk>" 3 storage: {} 1 Specify the name of the new data volume. 2 Specify the namespace of the source PVC. 3 Specify the name of the source PVC. Create the data volume by running the following command: USD oc create -f <datavolume>.yaml Note Data volumes prevent a VM from starting before the PVC is prepared. You can create a VM that references the new data volume while the PVC is being cloned. 7.2.5.2.2. Creating a VM from a cloned PVC by using a data volume template You can create a virtual machine (VM) that clones the persistent volume claim (PVC) of an existing VM by using a data volume template. This method creates a data volume whose lifecycle is dependent on the original VM. Deleting the original VM deletes the cloned data volume and its associated PVC. Prerequisites The VM with the source PVC must be powered down. Procedure Create a VirtualMachine manifest as shown in the following example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-dv-clone name: vm-dv-clone 1 spec: running: false template: metadata: labels: kubevirt.io/vm: vm-dv-clone spec: domain: devices: disks: - disk: bus: virtio name: root-disk resources: requests: memory: 64M volumes: - dataVolume: name: favorite-clone name: root-disk dataVolumeTemplates: - metadata: name: favorite-clone spec: storage: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi source: pvc: namespace: <source_namespace> 2 name: "<source_pvc>" 3 1 Specify the name of the VM. 2 Specify the namespace of the source PVC. 3 Specify the name of the source PVC. Create the virtual machine with the PVC-cloned data volume: USD oc create -f <vm-clone-datavolumetemplate>.yaml 7.2.6. Installing the QEMU guest agent and VirtIO drivers The QEMU guest agent is a daemon that runs on the virtual machine (VM) and passes information to the host about the VM, users, file systems, and secondary networks. You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat. 7.2.6.1. Installing the QEMU guest agent 7.2.6.1.1. Installing the QEMU guest agent on a Linux VM The qemu-guest-agent is widely available and available by default in Red Hat Enterprise Linux (RHEL) virtual machines (VMs). Install the agent and start the service. Note To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent. The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. 
The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI. Procedure Log in to the VM by using a console or SSH. Install the QEMU guest agent by running the following command: USD yum install -y qemu-guest-agent Ensure the service is persistent and start it: USD systemctl enable --now qemu-guest-agent Verification Run the following command to verify that AgentConnected is listed in the VM spec: USD oc get vm <vm_name> 7.2.6.1.2. Installing the QEMU guest agent on a Windows VM For Windows virtual machines (VMs), the QEMU guest agent is included in the VirtIO drivers. You can install the drivers during a Windows installation or on an existing Windows VM. Note To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent. The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI. Procedure In the Windows guest operating system, use the File Explorer to navigate to the guest-agent directory in the virtio-win CD drive. Run the qemu-ga-x86_64.msi installer. Verification Obtain a list of network services by running the following command: USD net start Verify that the output contains the QEMU Guest Agent . 7.2.6.2. Installing VirtIO drivers on Windows VMs VirtIO drivers are paravirtualized device drivers required for Microsoft Windows virtual machines (VMs) to run in OpenShift Virtualization. The drivers are shipped with the rest of the images and do not require a separate download. The container-native-virtualization/virtio-win container disk must be attached to the VM as a SATA CD drive to enable driver installation. You can install VirtIO drivers during Windows installation or add them to an existing Windows installation. After the drivers are installed, the container-native-virtualization/virtio-win container disk can be removed from the VM. Table 7.2. Supported drivers Driver name Hardware ID Description viostor VEN_1AF4&DEV_1001 VEN_1AF4&DEV_1042 The block driver. Sometimes labeled as an SCSI Controller in the Other devices group. viorng VEN_1AF4&DEV_1005 VEN_1AF4&DEV_1044 The entropy source driver. Sometimes labeled as a PCI Device in the Other devices group. NetKVM VEN_1AF4&DEV_1000 VEN_1AF4&DEV_1041 The network driver. Sometimes labeled as an Ethernet Controller in the Other devices group. Available only if a VirtIO NIC is configured. 7.2.6.2.1. Attaching VirtIO container disk to Windows VMs during installation You must attach the VirtIO container disk to the Windows VM to install the necessary Windows drivers. This can be done during creation of the VM. Procedure When creating a Windows VM from a template, click Customize VirtualMachine . Select Mount Windows drivers disk . Click Customize VirtualMachine parameters . Click Create VirtualMachine . After the VM is created, the virtio-win SATA CD disk will be attached to the VM. 7.2.6.2.2. Attaching VirtIO container disk to an existing Windows VM You must attach the VirtIO container disk to the Windows VM to install the necessary Windows drivers. This can be done to an existing VM.
Procedure Navigate to the existing Windows VM, and click Actions Stop . Go to VM Details Configuration Disks and click Add disk . Add windows-driver-disk from container source, set the Type to CD-ROM , and then set the Interface to SATA . Click Save . Start the VM, and connect to a graphical console. 7.2.6.2.3. Installing VirtIO drivers during Windows installation You can install the VirtIO drivers while installing Windows on a virtual machine (VM). Note This procedure uses a generic approach to the Windows installation and the installation method might differ between versions of Windows. See the documentation for the version of Windows that you are installing. Prerequisites A storage device containing the virtio drivers must be attached to the VM. Procedure In the Windows operating system, use the File Explorer to navigate to the virtio-win CD drive. Double-click the drive to run the appropriate installer for your VM. For a 64-bit vCPU, select the virtio-win-gt-x64 installer. 32-bit vCPUs are no longer supported. Optional: During the Custom Setup step of the installer, select the device drivers you want to install. The recommended driver set is selected by default. After the installation is complete, select Finish . Reboot the VM. Verification Open the system disk on the PC. This is typically C: . Navigate to Program Files Virtio-Win . If the Virtio-Win directory is present and contains a sub-directory for each driver, the installation was successful. 7.2.6.2.4. Installing VirtIO drivers from a SATA CD drive on an existing Windows VM You can install the VirtIO drivers from a SATA CD drive on an existing Windows virtual machine (VM). Note This procedure uses a generic approach to adding drivers to Windows. See the installation documentation for your version of Windows for specific installation steps. Prerequisites A storage device containing the virtio drivers must be attached to the VM as a SATA CD drive. Procedure Start the VM and connect to a graphical console. Log in to a Windows user session. Open Device Manager and expand Other devices to list any Unknown device . Open the Device Properties to identify the unknown device. Right-click the device and select Properties . Click the Details tab and select Hardware Ids in the Property list. Compare the Value for the Hardware Ids with the supported VirtIO drivers. Right-click the device and select Update Driver Software . Click Browse my computer for driver software and browse to the attached SATA CD drive, where the VirtIO drivers are located. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture. Click to install the driver. Repeat this process for all the necessary VirtIO drivers. After the driver installs, click Close to close the window. Reboot the VM to complete the driver installation. 7.2.6.2.5. Installing VirtIO drivers from a container disk added as a SATA CD drive You can install VirtIO drivers from a container disk that you add to a Windows virtual machine (VM) as a SATA CD drive. Tip Downloading the container-native-virtualization/virtio-win container disk from the Red Hat Ecosystem Catalog is not mandatory, because the container disk is downloaded from the Red Hat registry if it not already present in the cluster. However, downloading reduces the installation time. Prerequisites You must have access to the Red Hat registry or to the downloaded container-native-virtualization/virtio-win container disk in a restricted environment. 
Procedure Add the container-native-virtualization/virtio-win container disk as a CD drive by editing the VirtualMachine manifest: # ... spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 1 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk 1 OpenShift Virtualization boots the VM disks in the order defined in the VirtualMachine manifest. You can either define other VM disks that boot before the container-native-virtualization/virtio-win container disk or use the optional bootOrder parameter to ensure the VM boots from the correct disk. If you configure the boot order for a disk, you must configure the boot order for the other disks. Apply the changes: If the VM is not running, run the following command: USD virtctl start <vm> -n <namespace> If the VM is running, reboot the VM or run the following command: USD oc apply -f <vm.yaml> After the VM has started, install the VirtIO drivers from the SATA CD drive. 7.2.6.3. Updating VirtIO drivers 7.2.6.3.1. Updating VirtIO drivers on a Windows VM Update the virtio drivers on a Windows virtual machine (VM) by using the Windows Update service. Prerequisites The cluster must be connected to the internet. Disconnected clusters cannot reach the Windows Update service. Procedure In the Windows Guest operating system, click the Windows key and select Settings . Navigate to Windows Update Advanced Options Optional Updates . Install all updates from Red Hat, Inc. . Reboot the VM. Verification On the Windows VM, navigate to the Device Manager . Select a device. Select the Driver tab. Click Driver Details and confirm that the virtio driver details displays the correct version. 7.3. Connecting to virtual machine consoles You can connect to the following consoles to access running virtual machines (VMs): VNC console Serial console Desktop viewer for Windows VMs 7.3.1. Connecting to the VNC console You can connect to the VNC console of a virtual machine by using the OpenShift Container Platform web console or the virtctl command line tool. 7.3.1.1. Connecting to the VNC console by using the web console You can connect to the VNC console of a virtual machine (VM) by using the OpenShift Container Platform web console. Note If you connect to a Windows VM with a vGPU assigned as a mediated device, you can switch between the default display and the vGPU display. Procedure On the Virtualization VirtualMachines page, click a VM to open the VirtualMachine details page. Click the Console tab. The VNC console session starts automatically. Optional: To switch to the vGPU display of a Windows VM, select Ctl + Alt + 2 from the Send key list. Select Ctl + Alt + 1 from the Send key list to restore the default display. To end the console session, click outside the console pane and then click Disconnect . 7.3.1.2. Connecting to the VNC console by using virtctl You can use the virtctl command line tool to connect to the VNC console of a running virtual machine. Note If you run the virtctl vnc command on a remote machine over an SSH connection, you must forward the X session to your local machine by running the ssh command with the -X or -Y flags. Prerequisites You must install the virt-viewer package. Procedure Run the following command to start the console session: USD virtctl vnc <vm_name> If the connection fails, run the following command to collect troubleshooting information: USD virtctl vnc <vm_name> -v 4 7.3.1.3. 
Generating a temporary token for the VNC console Generate a temporary authentication bearer token for the Kubernetes API to access the VNC of a virtual machine (VM). Note Kubernetes also supports authentication using client certificates, instead of a bearer token, by modifying the curl command. Prerequisites A running virtual machine with OpenShift Virtualization 4.14 or later and ssp-operator 4.14 or later Procedure Enable the feature gate in the HyperConverged ( HCO ) custom resource (CR): USD oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{"op": "replace", "path": "/spec/featureGates/deployVmConsoleProxy", "value": true}]' Generate a token by running the following command: USD curl --header "Authorization: Bearer USD{TOKEN}" \ "https://api.<cluster_fqdn>/apis/token.kubevirt.io/v1alpha1/namespaces/<namespace>/virtualmachines/<vm_name>/vnc?duration=<duration>" 1 1 Duration can be in hours and minutes, with a minimum duration of 10 minutes. Example: 5h30m . The token is valid for 10 minutes by default if this parameter is not set. Sample output: { "token": "eyJhb..." } Optional: Use the token provided in the output to create a variable: USD export VNC_TOKEN="<token>" You can now use the token to access the VNC console of a VM. Verification Log in to the cluster by running the following command: USD oc login --token USD{VNC_TOKEN} Use virtctl to test access to the VNC console of the VM by running the following command: USD virtctl vnc <vm_name> -n <namespace> 7.3.2. Connecting to the serial console You can connect to the serial console of a virtual machine by using the OpenShift Container Platform web console or the virtctl command line tool. Note Running concurrent VNC connections to a single virtual machine is not currently supported. 7.3.2.1. Connecting to the serial console by using the web console You can connect to the serial console of a virtual machine (VM) by using the OpenShift Container Platform web console. Procedure On the Virtualization VirtualMachines page, click a VM to open the VirtualMachine details page. Click the Console tab. The VNC console session starts automatically. Click Disconnect to end the VNC console session. Otherwise, the VNC console session continues to run in the background. Select Serial console from the console list. To end the console session, click outside the console pane and then click Disconnect . 7.3.2.2. Connecting to the serial console by using virtctl You can use the virtctl command line tool to connect to the serial console of a running virtual machine. Procedure Run the following command to start the console session: USD virtctl console <vm_name> Press Ctrl+] to end the console session. 7.3.3. Connecting to the desktop viewer You can connect to a Windows virtual machine (VM) by using the desktop viewer and the Remote Desktop Protocol (RDP). 7.3.3.1. Connecting to the desktop viewer by using the web console You can connect to the desktop viewer of a Windows virtual machine (VM) by using the OpenShift Container Platform web console. Prerequisites You installed the QEMU guest agent on the Windows VM. You have an RDP client installed. Procedure On the Virtualization VirtualMachines page, click a VM to open the VirtualMachine details page. Click the Console tab. The VNC console session starts automatically. Click Disconnect to end the VNC console session. Otherwise, the VNC console session continues to run in the background. Select Desktop viewer from the console list. 
Click Create RDP Service to open the RDP Service dialog. Select Expose RDP Service and click Save to create a node port service. Click Launch Remote Desktop to download an .rdp file and launch the desktop viewer. 7.4. Configuring SSH access to virtual machines You can configure SSH access to virtual machines (VMs) by using the following methods: virtctl ssh command You create an SSH key pair, add the public key to a VM, and connect to the VM by running the virtctl ssh command with the private key. You can add public SSH keys to Red Hat Enterprise Linux (RHEL) 9 VMs at runtime or at first boot to VMs with guest operating systems that can be configured by using a cloud-init data source. virtctl port-forward command You add the virtctl port-forward command to your .ssh/config file and connect to the VM by using OpenSSH. Service You create a service, associate the service with the VM, and connect to the IP address and port exposed by the service. Secondary network You configure a secondary network, attach a virtual machine (VM) to the secondary network interface, and connect to the DHCP-allocated IP address. 7.4.1. Access configuration considerations Each method for configuring access to a virtual machine (VM) has advantages and limitations, depending on the traffic load and client requirements. Services provide excellent performance and are recommended for applications that are accessed from outside the cluster. If the internal cluster network cannot handle the traffic load, you can configure a secondary network. virtctl ssh and virtctl port-forwarding commands Simple to configure. Recommended for troubleshooting VMs. virtctl port-forwarding recommended for automated configuration of VMs with Ansible. Dynamic public SSH keys can be used to provision VMs with Ansible. Not recommended for high-traffic applications like Rsync or Remote Desktop Protocol because of the burden on the API server. The API server must be able to handle the traffic load. The clients must be able to access the API server. The clients must have access credentials for the cluster. Cluster IP service The internal cluster network must be able to handle the traffic load. The clients must be able to access an internal cluster IP address. Node port service The internal cluster network must be able to handle the traffic load. The clients must be able to access at least one node. Load balancer service A load balancer must be configured. Each node must be able to handle the traffic load of one or more load balancer services. Secondary network Excellent performance because traffic does not go through the internal cluster network. Allows a flexible approach to network topology. Guest operating system must be configured with appropriate security because the VM is exposed directly to the secondary network. If a VM is compromised, an intruder could gain access to the secondary network. 7.4.2. Using virtctl ssh You can add a public SSH key to a virtual machine (VM) and connect to the VM by running the virtctl ssh command. This method is simple to configure. However, it is not recommended for high traffic loads because it places a burden on the API server. 7.4.2.1. About static and dynamic SSH key management You can add public SSH keys to virtual machines (VMs) statically at first boot or dynamically at runtime. Note Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection.
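Both the static and the dynamic methods assume that an SSH key pair already exists. As a minimal sketch, with an illustrative file name, you can generate a key pair with the standard OpenSSH tooling: USD ssh-keygen -t ed25519 -f ./example_vm_key The public key ( ./example_vm_key.pub ) is the part that you add to the VM or store in a Secret object; the private key stays on your client machine and is used with the virtctl ssh command.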
Static SSH key management You can add a statically managed SSH key to a VM with a guest operating system that supports configuration by using a cloud-init data source. The key is added to the virtual machine (VM) at first boot. You can add the key by using one of the following methods: Add a key to a single VM when you create it by using the web console or the command line. Add a key to a project by using the web console. Afterwards, the key is automatically added to the VMs that you create in this project. Use cases As a VM owner, you can provision all your newly created VMs with a single key. Dynamic SSH key management You can enable dynamic SSH key management for a VM with Red Hat Enterprise Linux (RHEL) 9 installed. Afterwards, you can update the key during runtime. The key is added by the QEMU guest agent, which is installed with Red Hat boot sources. When dynamic key management is disabled, the default key management setting of a VM is determined by the image used for the VM. Use cases Granting or revoking access to VMs: As a cluster administrator, you can grant or revoke remote VM access by adding or removing the keys of individual users from a Secret object that is applied to all VMs in a namespace. User access: You can add your access credentials to all VMs that you create and manage. Ansible provisioning: As an operations team member, you can create a single secret that contains all the keys used for Ansible provisioning. As a VM owner, you can create a VM and attach the keys used for Ansible provisioning. Key rotation: As a cluster administrator, you can rotate the Ansible provisioner keys used by VMs in a namespace. As a workload owner, you can rotate the key for the VMs that you manage. 7.4.2.2. Static key management You can add a statically managed public SSH key when you create a virtual machine (VM) by using the OpenShift Container Platform web console or the command line. The key is added as a cloud-init data source when the VM boots for the first time. You can also add a public SSH key to a project when you create a VM by using the web console. The key is saved as a secret and is added automatically to all VMs that you create. Note If you add a secret to a project and then delete the VM, the secret is retained because it is a namespace resource. You must delete the secret manually. 7.4.2.2.1. Adding a key when creating a VM from a template You can add a statically managed public SSH key when you create a virtual machine (VM) by using the OpenShift Container Platform web console. The key is added to the VM as a cloud-init data source at first boot. This method does not affect cloud-init user data. Optional: You can add a key to a project. Afterwards, this key is added automatically to VMs that you create in the project. Prerequisites You generated an SSH key pair by running the ssh-keygen command. Procedure Navigate to Virtualization Catalog in the web console. Click a template tile. The guest operating system must support configuration from a cloud-init data source. Click Customize VirtualMachine . Click . Click the Scripts tab. If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key and select one of the following options: Use existing : Select a secret from the secrets list. Add new : Browse to the SSH key file or paste the file in the key field. Enter the secret name. Optional: Select Automatically apply this key to any new VirtualMachine you create in this project . Click Save . Click Create VirtualMachine . 
The VirtualMachine details page displays the progress of the VM creation. Verification Click the Scripts tab on the Configuration tab. The secret name is displayed in the Authorized SSH key section. 7.4.2.2.2. Adding a key when creating a VM from an instance type You can create a virtual machine (VM) from an instance type by using the OpenShift Container Platform web console. You can add a statically managed SSH key when you create a virtual machine (VM) from an instance type by using the OpenShift Container Platform web console. The key is added to the VM as a cloud-init data source at first boot. This method does not affect cloud-init user data. Procedure In the web console, navigate to Virtualization Catalog and click the InstanceTypes tab. Select a bootable volume. Note The volume table only lists volumes in the openshift-virtualization-os-images namespace that have the instancetype.kubevirt.io/default-preference label. Click an instance type tile and select the configuration appropriate for your workload. If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key in the VirtualMachine details section. Select one of the following options: Use existing : Select a secret from the secrets list. Add new : Browse to the public SSH key file or paste the file in the key field. Enter the secret name. Optional: Select Automatically apply this key to any new VirtualMachine you create in this project . Click Save . Optional: Click View YAML & CLI to view the YAML file. Click CLI to view the CLI commands. You can also download or copy either the YAML file contents or the CLI commands. Click Create VirtualMachine . After the VM is created, you can monitor the status on the VirtualMachine details page. 7.4.2.2.3. Adding a key when creating a VM by using the command line You can add a statically managed public SSH key when you create a virtual machine (VM) by using the command line. The key is added to the VM at first boot. The key is added to the VM as a cloud-init data source. This method separates the access credentials from the application data in the cloud-init user data. This method does not affect cloud-init user data. Prerequisites You generated an SSH key pair by running the ssh-keygen command. 
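As an alternative to embedding the key data in the Secret manifest shown in the following procedure, you can create the Secret directly from the public key file. This is a sketch only; the key file name is illustrative, and the Secret and namespace names match the procedure that follows: USD oc create secret generic authorized-keys --from-file=key=./example_vm_key.pub -n example-namespace The accessCredentials stanza of the VirtualMachine manifest can then reference the Secret by its name, authorized-keys .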
Procedure Create a manifest file for a VirtualMachine object and a Secret object: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: dataVolumeTemplates: - apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: example-vm-disk spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: requests: storage: 30Gi running: false template: metadata: labels: kubevirt.io/domain: example-vm spec: domain: cpu: cores: 1 sockets: 2 threads: 1 devices: disks: - disk: bus: virtio name: rootdisk - disk: bus: virtio name: cloudinitdisk interfaces: - masquerade: {} name: default rng: {} features: smm: enabled: true firmware: bootloader: efi: {} resources: requests: memory: 8Gi evictionStrategy: LiveMigrate networks: - name: default pod: {} volumes: - dataVolume: name: example-vm-disk name: rootdisk - cloudInitConfigDrive: <.> userData: |- #cloud-config user: cloud-user password: <password> chpasswd: { expire: False } name: cloudinitdisk accessCredentials: - sshPublicKey: propagationMethod: configDrive: {} source: secret: secretName: authorized-keys <.> --- apiVersion: v1 kind: Secret metadata: name: authorized-keys data: key: | MIIEpQIBAAKCAQEAulqb/Y... <.> <.> Specify cloudInitConfigDrive to create a configuration drive. <.> Specify the Secret object name. <.> Paste the public SSH key. Create the VirtualMachine and Secret objects: USD oc create -f <manifest_file>.yaml Start the VM: USD virtctl start example-vm -n example-namespace Verification Get the VM configuration: USD oc describe vm example-vm -n example-namespace Example output apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: template: spec: accessCredentials: - sshPublicKey: propagationMethod: configDrive: {} source: secret: secretName: authorized-keys # ... 7.4.2.3. Dynamic key management You can enable dynamic key injection for a virtual machine (VM) by using the OpenShift Container Platform web console or the command line. Then, you can update the key at runtime. Note Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection. If you disable dynamic key injection, the VM inherits the key management method of the image from which it was created. 7.4.2.3.1. Enabling dynamic key injection when creating a VM from a template You can enable dynamic public SSH key injection when you create a virtual machine (VM) from a template by using the OpenShift Container Platform web console. Then, you can update the key at runtime. Note Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection. The key is added to the VM by the QEMU guest agent, which is installed with RHEL 9. Prerequisites You generated an SSH key pair by running the ssh-keygen command. Procedure Navigate to Virtualization Catalog in the web console. Click the Red Hat Enterprise Linux 9 VM tile. Click Customize VirtualMachine . Click . Click the Scripts tab. If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key and select one of the following options: Use existing : Select a secret from the secrets list. Add new : Browse to the SSH key file or paste the file in the key field. Enter the secret name. Optional: Select Automatically apply this key to any new VirtualMachine you create in this project . Set Dynamic SSH key injection to on. Click Save . Click Create VirtualMachine .
The VirtualMachine details page displays the progress of the VM creation. Verification Click the Scripts tab on the Configuration tab. The secret name is displayed in the Authorized SSH key section. 7.4.2.3.2. Enabling dynamic key injection when creating a VM from an instance type You can create a virtual machine (VM) from an instance type by using the OpenShift Container Platform web console. You can enable dynamic SSH key injection when you create a virtual machine (VM) from an instance type by using the OpenShift Container Platform web console. Then, you can add or revoke the key at runtime. Note Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection. The key is added to the VM by the QEMU guest agent, which is installed with RHEL 9. Procedure In the web console, navigate to Virtualization Catalog and click the InstanceTypes tab. Select a bootable volume. Note The volume table only lists volumes in the openshift-virtualization-os-images namespace that have the instancetype.kubevirt.io/default-preference label. Click an instance type tile and select the configuration appropriate for your workload. If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key in the VirtualMachine details section. Select one of the following options: Use existing : Select a secret from the secrets list. Add new : Browse to the public SSH key file or paste the file in the key field. Enter the secret name. Optional: Select Automatically apply this key to any new VirtualMachine you create in this project . Click Save . Set Dynamic SSH key injection in the VirtualMachine details section to on. Optional: Click View YAML & CLI to view the YAML file. Click CLI to view the CLI commands. You can also download or copy either the YAML file contents or the CLI commands. Click Create VirtualMachine . After the VM is created, you can monitor the status on the VirtualMachine details page. 7.4.2.3.3. Enabling dynamic SSH key injection by using the web console You can enable dynamic key injection for a virtual machine (VM) by using the OpenShift Container Platform web console. Then, you can update the public SSH key at runtime. The key is added to the VM by the QEMU guest agent, which is installed with Red Hat Enterprise Linux (RHEL) 9. Prerequisites The guest operating system is RHEL 9. Procedure Navigate to Virtualization VirtualMachines in the web console. Select a VM to open the VirtualMachine details page. On the Configuration tab, click Scripts . If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key and select one of the following options: Use existing : Select a secret from the secrets list. Add new : Browse to the SSH key file or paste the file in the key field. Enter the secret name. Optional: Select Automatically apply this key to any new VirtualMachine you create in this project . Set Dynamic SSH key injection to on. Click Save . 7.4.2.3.4. Enabling dynamic key injection by using the command line You can enable dynamic key injection for a virtual machine (VM) by using the command line. Then, you can update the public SSH key at runtime. Note Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection. The key is added to the VM by the QEMU guest agent, which is installed automatically with RHEL 9. Prerequisites You generated an SSH key pair by running the ssh-keygen command.
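Because the QEMU guest agent propagates the keys at runtime, you can later grant or revoke access by replacing the contents of the referenced Secret without restarting the VM. A hedged sketch, reusing the authorized-keys Secret and example-namespace names from the following procedure and an illustrative new key file: USD oc create secret generic authorized-keys --from-file=key=./new_vm_key.pub --dry-run=client -o yaml -n example-namespace | oc apply -f -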
Procedure Create a manifest file for a VirtualMachine object and a Secret object: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: dataVolumeTemplates: - apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: example-vm-disk spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: requests: storage: 30Gi running: false template: metadata: labels: kubevirt.io/domain: example-vm spec: domain: cpu: cores: 1 sockets: 2 threads: 1 devices: disks: - disk: bus: virtio name: rootdisk - disk: bus: virtio name: cloudinitdisk interfaces: - masquerade: {} name: default rng: {} features: smm: enabled: true firmware: bootloader: efi: {} resources: requests: memory: 8Gi evictionStrategy: LiveMigrate networks: - name: default pod: {} volumes: - dataVolume: name: example-vm-disk name: rootdisk - cloudInitConfigDrive: <.> userData: |- #cloud-config user: cloud-user password: <password> chpasswd: { expire: False } runcmd: - [ setsebool, -P, virt_qemu_ga_manage_ssh, on ] name: cloudinitdisk accessCredentials: - sshPublicKey: propagationMethod: qemuGuestAgent: users: ["user1","user2","fedora"] <.> source: secret: secretName: authorized-keys <.> --- apiVersion: v1 kind: Secret metadata: name: authorized-keys data: key: | MIIEpQIBAAKCAQEAulqb/Y... <.> <.> Specify cloudInitConfigDrive to create a configuration drive. <.> Specify the user names. <.> Specify the Secret object name. <.> Paste the public SSH key. Create the VirtualMachine and Secret objects: USD oc create -f <manifest_file>.yaml Start the VM: USD virtctl start example-vm -n example-namespace Verification Get the VM configuration: USD oc describe vm example-vm -n example-namespace Example output apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: template: spec: accessCredentials: - sshPublicKey: propagationMethod: qemuGuestAgent: users: ["user1","user2","fedora"] source: secret: secretName: authorized-keys # ... 7.4.2.4. Using the virtctl ssh command You can access a running virtual machine (VM) by using the virtctl ssh command. Prerequisites You installed the virtctl command line tool. You added a public SSH key to the VM. You have an SSH client installed. The environment where you installed the virtctl tool has the cluster permissions required to access the VM. For example, you ran oc login or you set the KUBECONFIG environment variable. Procedure Run the virtctl ssh command: USD virtctl -n <namespace> ssh <username>@example-vm -i <ssh_key> 1 1 Specify the namespace, user name, and the SSH private key. The default SSH key location is /home/user/.ssh . If you save the key in a different location, you must specify the path. Example USD virtctl -n my-namespace ssh cloud-user@example-vm -i my-key Tip You can copy the virtctl ssh command in the web console by selecting Copy SSH command from the options menu beside a VM on the VirtualMachines page . 7.4.3. Using the virtctl port-forward command You can use your local OpenSSH client and the virtctl port-forward command to connect to a running virtual machine (VM). You can use this method with Ansible to automate the configuration of VMs. This method is recommended for low-traffic applications because port-forwarding traffic is sent over the control plane. This method is not recommended for high-traffic applications such as Rsync or Remote Desktop Protocol because it places a heavy burden on the API server.
Prerequisites You have installed the virtctl client. The virtual machine you want to access is running. The environment where you installed the virtctl tool has the cluster permissions required to access the VM. For example, you ran oc login or you set the KUBECONFIG environment variable. Procedure Add the following text to the ~/.ssh/config file on your client machine: Host vm/* ProxyCommand virtctl port-forward --stdio=true %h %p Connect to the VM by running the following command: USD ssh <user>@vm/<vm_name>.<namespace> 7.4.4. Using a service for SSH access You can create a service for a virtual machine (VM) and connect to the IP address and port exposed by the service. Services provide excellent performance and are recommended for applications that are accessed from outside the cluster or within the cluster. Ingress traffic is protected by firewalls. If the cluster network cannot handle the traffic load, consider using a secondary network for VM access. 7.4.4.1. About services A Kubernetes service exposes network access for clients to an application running on a set of pods. Services offer abstraction, load balancing, and, in the case of the NodePort and LoadBalancer types, exposure to the outside world. ClusterIP Exposes the service on an internal IP address and as a DNS name to other applications within the cluster. A single service can map to multiple virtual machines. When a client tries to connect to the service, the client's request is load balanced among available backends. ClusterIP is the default service type. NodePort Exposes the service on the same port of each selected node in the cluster. NodePort makes a port accessible from outside the cluster, as long as the node itself is externally accessible to the client. LoadBalancer Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP address to the service. Note For on-premise clusters, you can configure a load-balancing service by deploying the MetalLB Operator. 7.4.4.2. Creating a service You can create a service to expose a virtual machine (VM) by using the OpenShift Container Platform web console, virtctl command line tool, or a YAML file. 7.4.4.2.1. Enabling load balancer service creation by using the web console You can enable the creation of load balancer services for a virtual machine (VM) by using the OpenShift Container Platform web console. Prerequisites You have configured a load balancer for the cluster. You are logged in as a user with the cluster-admin role. You created a network attachment definition for the network. Procedure Navigate to Virtualization Overview . On the Settings tab, click Cluster . Expand LoadBalancer service and select Enable the creation of LoadBalancer services for SSH connections to VirtualMachines . 7.4.4.2.2. Creating a service by using the web console You can create a node port or load balancer service for a virtual machine (VM) by using the OpenShift Container Platform web console. Prerequisites You configured the cluster network to support either a load balancer or a node port. To create a load balancer service, you enabled the creation of load balancer services. Procedure Navigate to VirtualMachines and select a virtual machine to view the VirtualMachine details page. On the Details tab, select SSH over LoadBalancer from the SSH service type list. Optional: Click the copy icon to copy the SSH command to your clipboard. Verification Check the Services pane on the Details tab to view the new service. 7.4.4.2.3. 
Creating a service by using virtctl You can create a service for a virtual machine (VM) by using the virtctl command line tool. Prerequisites You installed the virtctl command line tool. You configured the cluster network to support the service. The environment where you installed virtctl has the cluster permissions required to access the VM. For example, you ran oc login or you set the KUBECONFIG environment variable. Procedure Create a service by running the following command: USD virtctl expose vm <vm_name> --name <service_name> --type <service_type> --port <port> 1 1 Specify the ClusterIP , NodePort , or LoadBalancer service type. Example USD virtctl expose vm example-vm --name example-service --type NodePort --port 22 Verification Verify the service by running the following command: USD oc get service Next steps After you create a service with virtctl , you must add special: key to the spec.template.metadata.labels stanza of the VirtualMachine manifest. See Creating a service by using the command line . 7.4.4.2.4. Creating a service by using the command line You can create a service and associate it with a virtual machine (VM) by using the command line. Prerequisites You configured the cluster network to support the service. Procedure Edit the VirtualMachine manifest to add the label for service creation: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: running: false template: metadata: labels: special: key 1 # ... 1 Add special: key to the spec.template.metadata.labels stanza. Note Labels on a virtual machine are passed through to the pod. The special: key label must match the label in the spec.selector attribute of the Service manifest. Save the VirtualMachine manifest file to apply your changes. Create a Service manifest to expose the VM: apiVersion: v1 kind: Service metadata: name: example-service namespace: example-namespace spec: # ... selector: special: key 1 type: NodePort 2 ports: 3 - protocol: TCP port: 80 targetPort: 9376 nodePort: 30000 1 Specify the label that you added to the spec.template.metadata.labels stanza of the VirtualMachine manifest. 2 Specify ClusterIP , NodePort , or LoadBalancer . 3 Specifies a collection of network ports and protocols that you want to expose from the virtual machine. Save the Service manifest file. Create the service by running the following command: USD oc create -f example-service.yaml Restart the VM to apply the changes. Verification Query the Service object to verify that it is available: USD oc get service -n example-namespace 7.4.4.3. Connecting to a VM exposed by a service by using SSH You can connect to a virtual machine (VM) that is exposed by a service by using SSH. Prerequisites You created a service to expose the VM. You have an SSH client installed. You are logged in to the cluster. Procedure Run the following command to access the VM: USD ssh <user_name>@<ip_address> -p <port> 1 1 Specify the cluster IP for a cluster IP service, the node IP for a node port service, or the external IP address for a load balancer service. 7.4.5. Using a secondary network for SSH access You can configure a secondary network, attach a virtual machine (VM) to the secondary network interface, and connect to the DHCP-allocated IP address by using SSH. Important Secondary networks provide excellent performance because the traffic is not handled by the cluster network stack. However, the VMs are exposed directly to the secondary network and are not protected by firewalls.
If a VM is compromised, an intruder could gain access to the secondary network. You must configure appropriate security within the operating system of the VM if you use this method. See the Multus and SR-IOV documentation in the OpenShift Virtualization Tuning & Scaling Guide for additional information about networking options. Prerequisites You configured a secondary network such as Linux bridge or SR-IOV . You created a network attachment definition for a Linux bridge network or the SR-IOV Network Operator created a network attachment definition when you created an SriovNetwork object. 7.4.5.1. Configuring a VM network interface by using the web console You can configure a network interface for a virtual machine (VM) by using the OpenShift Container Platform web console. Prerequisites You created a network attachment definition for the network. Procedure Navigate to Virtualization VirtualMachines . Click a VM to view the VirtualMachine details page. On the Configuration tab, click the Network interfaces tab. Click Add network interface . Enter the interface name and select the network attachment definition from the Network list. Click Save . Restart the VM to apply the changes. 7.4.5.2. Connecting to a VM attached to a secondary network by using SSH You can connect to a virtual machine (VM) attached to a secondary network by using SSH. Prerequisites You attached a VM to a secondary network with a DHCP server. You have an SSH client installed. Procedure Obtain the IP address of the VM by running the following command: USD oc describe vm <vm_name> -n <namespace> Example output Connect to the VM by running the following command: USD ssh <user_name>@<ip_address> -i <ssh_key> Example USD ssh cloud-user@<ip_address> -i ~/.ssh/id_rsa_cloud-user Note You can also access a VM attached to a secondary network interface by using the cluster FQDN . 7.5. Editing virtual machines You can update a virtual machine (VM) configuration by using the OpenShift Container Platform web console. You can update the YAML file or the VirtualMachine details page . You can also edit a VM by using the command line. 7.5.1. Editing a virtual machine by using the command line You can edit a virtual machine (VM) by using the command line. Prerequisites You installed the oc CLI. Procedure Obtain the virtual machine configuration by running the following command: USD oc edit vm <vm_name> Edit the YAML configuration. If you edit a running virtual machine, you need to do one of the following: Restart the virtual machine. Run the following command for the new configuration to take effect: USD oc apply -f <vm_name>.yaml -n <namespace> 7.5.2. Adding a disk to a virtual machine You can add a virtual disk to a virtual machine (VM) by using the OpenShift Container Platform web console. Procedure Navigate to Virtualization VirtualMachines in the web console. Select a VM to open the VirtualMachine details page. On the Disks tab, click Add disk . Specify the Source , Name , Size , Type , Interface , and Storage Class . Optional: You can enable preallocation if you use a blank disk source and require maximum write performance when creating data volumes. To do so, select the Enable preallocation checkbox. Optional: You can clear Apply optimized StorageProfile settings to change the Volume Mode and Access Mode for the virtual disk. If you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults config map. Click Add . Note If the VM is running, you must restart the VM to apply the change. 7.5.2.1.
Storage fields Field Description Blank (creates PVC) Create an empty disk. Import via URL (creates PVC) Import content via URL (HTTP or HTTPS endpoint). Use an existing PVC Use a PVC that is already available in the cluster. Clone existing PVC (creates PVC) Select an existing PVC available in the cluster and clone it. Import via Registry (creates PVC) Import content via container registry. Container (ephemeral) Upload content from a container located in a registry accessible from the cluster. The container disk should be used only for read-only filesystems such as CD-ROMs or temporary virtual machines. Name Name of the disk. The name can contain lowercase letters ( a-z ), numbers ( 0-9 ), hyphens ( - ), and periods ( . ), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, or special characters. Size Size of the disk in GiB. Type Type of disk. Example: Disk or CD-ROM Interface Type of disk device. Supported interfaces are virtIO , SATA , and SCSI . Storage Class The storage class that is used to create the disk. Advanced storage settings The following advanced storage settings are optional and available for Blank , Import via URL , and Clone existing PVC disks. If you do not specify these parameters, the system uses the default storage profile values. Parameter Option Parameter description Volume Mode Filesystem Stores the virtual disk on a file system-based volume. Block Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it. Access Mode ReadWriteOnce (RWO) Volume can be mounted as read-write by a single node. ReadWriteMany (RWX) Volume can be mounted as read-write by many nodes at one time. Note This mode is required for live migration. 7.5.3. Adding a secret, config map, or service account to a virtual machine You add a secret, config map, or service account to a virtual machine by using the OpenShift Container Platform web console. These resources are added to the virtual machine as disks. You then mount the secret, config map, or service account as you would mount any other disk. If the virtual machine is running, changes do not take effect until you restart the virtual machine. The newly added resources are marked as pending changes at the top of the page. Prerequisites The secret, config map, or service account that you want to add must exist in the same namespace as the target virtual machine. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. Click Configuration Environment . Click Add Config Map, Secret or Service Account . Click Select a resource and select a resource from the list. A six character serial number is automatically generated for the selected resource. Optional: Click Reload to revert the environment to its last saved state. Click Save . Verification On the VirtualMachine details page, click Configuration Disks and verify that the resource is displayed in the list of disks. Restart the virtual machine by clicking Actions Restart . You can now mount the secret, config map, or service account as you would mount any other disk. Additional resources for config maps, secrets, and service accounts Understanding config maps Providing sensitive data to pods Understanding and creating service accounts 7.6. Editing boot order You can update the values for a boot order list by using the web console or the CLI. 
With Boot Order in the Virtual Machine Overview page, you can: Select a disk or network interface controller (NIC) and add it to the boot order list. Edit the order of the disks or NICs in the boot order list. Remove a disk or NIC from the boot order list, and return it back to the inventory of bootable sources. 7.6.1. Adding items to a boot order list in the web console Add items to a boot order list by using the web console. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. Click the Details tab. Click the pencil icon that is located on the right side of Boot Order . If a YAML configuration does not exist, or if this is the first time that you are creating a boot order list, the following message displays: No resource selected. VM will attempt to boot from disks by order of appearance in YAML file. Click Add Source and select a bootable disk or network interface controller (NIC) for the virtual machine. Add any additional disks or NICs to the boot order list. Click Save . Note If the virtual machine is running, changes to Boot Order will not take effect until you restart the virtual machine. You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts. 7.6.2. Editing a boot order list in the web console Edit the boot order list in the web console. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. Click the Details tab. Click the pencil icon that is located on the right side of Boot Order . Choose the appropriate method to move the item in the boot order list: If you do not use a screen reader, hover over the arrow icon next to the item that you want to move, drag the item up or down, and drop it in a location of your choice. If you use a screen reader, press the Up Arrow key or Down Arrow key to move the item in the boot order list. Then, press the Tab key to drop the item in a location of your choice. Click Save . Note If the virtual machine is running, changes to the boot order list will not take effect until you restart the virtual machine. You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts. 7.6.3. Editing a boot order list in the YAML configuration file Edit the boot order list in a YAML configuration file by using the CLI. Procedure Open the YAML configuration file for the virtual machine by running the following command: USD oc edit vm <vm_name> -n <namespace> Edit the YAML file and modify the values for the boot order associated with a disk or network interface controller (NIC). For example: disks: - bootOrder: 1 1 disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk - cdrom: bus: virtio name: cd-drive-1 interfaces: - bootOrder: 2 2 macAddress: '02:96:c4:00:00' masquerade: {} name: default 1 The boot order value specified for the disk. 2 The boot order value specified for the network interface controller. Save the YAML file. 7.6.4. Removing items from a boot order list in the web console Remove items from a boot order list by using the web console. Procedure Click Virtualization VirtualMachines from the side menu.
Select a virtual machine to open the VirtualMachine details page. Click the Details tab. Click the pencil icon that is located on the right side of Boot Order . Click the Remove icon to the item. The item is removed from the boot order list and saved in the list of available boot sources. If you remove all items from the boot order list, the following message displays: No resource selected. VM will attempt to boot from disks by order of appearance in YAML file. Note If the virtual machine is running, changes to Boot Order will not take effect until you restart the virtual machine. You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts. 7.7. Deleting virtual machines You can delete a virtual machine from the web console or by using the oc command line interface. 7.7.1. Deleting a virtual machine using the web console Deleting a virtual machine permanently removes it from the cluster. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. Click the Options menu beside a virtual machine and select Delete . Alternatively, click the virtual machine name to open the VirtualMachine details page and click Actions Delete . Optional: Select With grace period or clear Delete disks . Click Delete to permanently delete the virtual machine. 7.7.2. Deleting a virtual machine by using the CLI You can delete a virtual machine by using the oc command line interface (CLI). The oc client enables you to perform actions on multiple virtual machines. Prerequisites Identify the name of the virtual machine that you want to delete. Procedure Delete the virtual machine by running the following command: USD oc delete vm <vm_name> Note This command only deletes a VM in the current project. Specify the -n <project_name> option if the VM you want to delete is in a different project or namespace. 7.8. Exporting virtual machines You can export a virtual machine (VM) and its associated disks in order to import a VM into another cluster or to analyze the volume for forensic purposes. You create a VirtualMachineExport custom resource (CR) by using the command line interface. Alternatively, you can use the virtctl vmexport command to create a VirtualMachineExport CR and to download exported volumes. 7.8.1. Creating a VirtualMachineExport custom resource You can create a VirtualMachineExport custom resource (CR) to export the following objects: Virtual machine (VM): Exports the persistent volume claims (PVCs) of a specified VM. VM snapshot: Exports PVCs contained in a VirtualMachineSnapshot CR. PVC: Exports a PVC. If the PVC is used by another pod, such as the virt-launcher pod, the export remains in a Pending state until the PVC is no longer in use. The VirtualMachineExport CR creates internal and external links for the exported volumes. Internal links are valid within the cluster. External links can be accessed by using an Ingress or Route . The export server supports the following file formats: raw : Raw disk image file. gzip : Compressed disk image file. dir : PVC directory and files. tar.gz : Compressed PVC file. Prerequisites The VM must be shut down for a VM export. 
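If you prefer the virtctl vmexport workflow mentioned earlier to creating the custom resource manually as described in the following procedure, the command can create the export and download a volume in one step. This is a sketch only; the export name, VM name, namespace, and output file name are illustrative, and the available flags depend on your virtctl version: USD virtctl vmexport download example-export --vm=example-vm --output=example-disk.img.gz -n example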
Procedure Create a VirtualMachineExport manifest to export a volume from a VirtualMachine , VirtualMachineSnapshot , or PersistentVolumeClaim CR according to the following example and save it as example-export.yaml : VirtualMachineExport example apiVersion: export.kubevirt.io/v1alpha1 kind: VirtualMachineExport metadata: name: example-export spec: source: apiGroup: "kubevirt.io" 1 kind: VirtualMachine 2 name: example-vm ttlDuration: 1h 3 1 Specify the appropriate API group: "kubevirt.io" for VirtualMachine . "snapshot.kubevirt.io" for VirtualMachineSnapshot . "" for PersistentVolumeClaim . 2 Specify VirtualMachine , VirtualMachineSnapshot , or PersistentVolumeClaim . 3 Optional. The default duration is 2 hours. Create the VirtualMachineExport CR: USD oc create -f example-export.yaml Get the VirtualMachineExport CR: USD oc get vmexport example-export -o yaml The internal and external links for the exported volumes are displayed in the status stanza: Output example apiVersion: export.kubevirt.io/v1alpha1 kind: VirtualMachineExport metadata: name: example-export namespace: example spec: source: apiGroup: "" kind: PersistentVolumeClaim name: example-pvc tokenSecretRef: example-token status: conditions: - lastProbeTime: null lastTransitionTime: "2022-06-21T14:10:09Z" reason: podReady status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2022-06-21T14:09:02Z" reason: pvcBound status: "True" type: PVCReady links: external: 1 cert: |- -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- volumes: - formats: - format: raw url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img - format: gzip url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img.gz name: example-disk internal: 2 cert: |- -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- volumes: - formats: - format: raw url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img - format: gzip url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img.gz name: example-disk phase: Ready serviceName: virt-export-example-export 1 External links are accessible from outside the cluster by using an Ingress or Route . 2 Internal links are only valid inside the cluster. 7.8.2. Accessing exported virtual machine manifests After you export a virtual machine (VM) or snapshot, you can get the VirtualMachine manifest and related information from the export server. Prerequisites You exported a virtual machine or VM snapshot by creating a VirtualMachineExport custom resource (CR). Note VirtualMachineExport objects that have the spec.source.kind: PersistentVolumeClaim parameter do not generate virtual machine manifests. Procedure To access the manifests, you must first copy the certificates from the source cluster to the target cluster. Log in to the source cluster. Save the certificates to the cacert.crt file by running the following command: USD oc get vmexport <export_name> -o jsonpath={.status.links.external.cert} > cacert.crt 1 1 Replace <export_name> with the metadata.name value from the VirtualMachineExport object. Copy the cacert.crt file to the target cluster. 
Decode the token in the source cluster and save it to the token_decode file by running the following command: USD oc get secret export-token-<export_name> -o jsonpath={.data.token} | base64 --decode > token_decode 1 1 Replace <export_name> with the metadata.name value from the VirtualMachineExport object. Copy the token_decode file to the target cluster. Get the VirtualMachineExport custom resource by running the following command: USD oc get vmexport <export_name> -o yaml Review the status.links stanza, which is divided into external and internal sections. Note the manifests.url fields within each section: Example output apiVersion: export.kubevirt.io/v1alpha1 kind: VirtualMachineExport metadata: name: example-export spec: source: apiGroup: "kubevirt.io" kind: VirtualMachine name: example-vm tokenSecretRef: example-token status: #... links: external: #... manifests: - type: all url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/external/manifests/all 1 - type: auth-header-secret url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret 2 internal: #... manifests: - type: all url: https://virt-export-export-pvc.default.svc/internal/manifests/all 3 - type: auth-header-secret url: https://virt-export-export-pvc.default.svc/internal/manifests/secret phase: Ready serviceName: virt-export-example-export 1 Contains the VirtualMachine manifest, DataVolume manifest, if present, and a ConfigMap manifest that contains the public certificate for the external URL's ingress or route. 2 Contains a secret containing a header that is compatible with Containerized Data Importer (CDI). The header contains a text version of the export token. 3 Contains the VirtualMachine manifest, DataVolume manifest, if present, and a ConfigMap manifest that contains the certificate for the internal URL's export server. Log in to the target cluster. Get the Secret manifest by running the following command: USD curl --cacert cacert.crt <secret_manifest_url> -H \ 1 "x-kubevirt-export-token:token_decode" -H \ 2 "Accept:application/yaml" 1 Replace <secret_manifest_url> with an auth-header-secret URL from the VirtualMachineExport YAML output. 2 Reference the token_decode file that you created earlier. For example: USD curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret -H "x-kubevirt-export-token:token_decode" -H "Accept:application/yaml" Get the manifests of type: all , such as the ConfigMap and VirtualMachine manifests, by running the following command: USD curl --cacert cacert.crt <all_manifest_url> -H \ 1 "x-kubevirt-export-token:token_decode" -H \ 2 "Accept:application/yaml" 1 Replace <all_manifest_url> with a URL from the VirtualMachineExport YAML output. 2 Reference the token_decode file that you created earlier. For example: USD curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/external/manifests/all -H "x-kubevirt-export-token:token_decode" -H "Accept:application/yaml" steps You can now create the ConfigMap and VirtualMachine objects on the target cluster by using the exported manifests. 7.9. 
Managing virtual machine instances If you have standalone virtual machine instances (VMIs) that were created independently outside of the OpenShift Virtualization environment, you can manage them by using the web console or by using oc or virtctl commands from the command-line interface (CLI). The virtctl command provides more virtualization options than the oc command. For example, you can use virtctl to pause a VM or expose a port. 7.9.1. About virtual machine instances A virtual machine instance (VMI) is a representation of a running virtual machine (VM). When a VMI is owned by a VM or by another object, you manage it through its owner in the web console or by using the oc command-line interface (CLI). A standalone VMI is created and started independently with a script, through automation, or by using other methods in the CLI. In your environment, you might have standalone VMIs that were developed and started outside of the OpenShift Virtualization environment. You can continue to manage those standalone VMIs by using the CLI. You can also use the web console for specific tasks associated with standalone VMIs: List standalone VMIs and their details. Edit labels and annotations for a standalone VMI. Delete a standalone VMI. When you delete a VM, the associated VMI is automatically deleted. You delete a standalone VMI directly because it is not owned by VMs or other objects. Note Before you uninstall OpenShift Virtualization, list and view the standalone VMIs by using the CLI or the web console. Then, delete any outstanding VMIs. 7.9.2. Listing all virtual machine instances using the CLI You can list all virtual machine instances (VMIs) in your cluster, including standalone VMIs and those owned by virtual machines, by using the oc command-line interface (CLI). Procedure List all VMIs by running the following command: USD oc get vmis -A 7.9.3. Listing standalone virtual machine instances using the web console Using the web console, you can list and view standalone virtual machine instances (VMIs) in your cluster that are not owned by virtual machines (VMs). Note VMIs that are owned by VMs or other objects are not displayed in the web console. The web console displays only standalone VMIs. If you want to list all VMIs in your cluster, you must use the CLI. Procedure Click Virtualization VirtualMachines from the side menu. You can identify a standalone VMI by the dark colored badge next to its name. 7.9.4. Editing a standalone virtual machine instance using the web console You can edit the annotations and labels of a standalone virtual machine instance (VMI) using the web console. Other fields are not editable. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. Select a standalone VMI to open the VirtualMachineInstance details page. On the Details tab, click the pencil icon beside Annotations or Labels . Make the relevant changes and click Save . 7.9.5. Deleting a standalone virtual machine instance using the CLI You can delete a standalone virtual machine instance (VMI) by using the oc command-line interface (CLI). Prerequisites Identify the name of the VMI that you want to delete. Procedure Delete the VMI by running the following command: USD oc delete vmi <vmi_name> 7.9.6. Deleting a standalone virtual machine instance using the web console Delete a standalone virtual machine instance (VMI) from the web console. Procedure In the OpenShift Container Platform web console, click Virtualization VirtualMachines from the side menu.
Click Actions Delete VirtualMachineInstance . In the confirmation pop-up window, click Delete to permanently delete the standalone VMI. 7.10. Controlling virtual machine states You can stop, start, restart, and unpause virtual machines from the web console. You can use virtctl to manage virtual machine states and perform other actions from the CLI. For example, you can use virtctl to force stop a VM or expose a port; example virtctl commands are shown later in this section. 7.10.1. Starting a virtual machine You can start a virtual machine from the web console. Procedure Click Virtualization VirtualMachines from the side menu. Find the row that contains the virtual machine that you want to start. Navigate to the appropriate menu for your use case: To stay on this page, where you can perform actions on multiple virtual machines: Click the Options menu located at the far right end of the row and click Start VirtualMachine . To view comprehensive information about the selected virtual machine before you start it: Access the VirtualMachine details page by clicking the name of the virtual machine. Click Actions Start . Note When you start a virtual machine that is provisioned from a URL source for the first time, the virtual machine has a status of Importing while OpenShift Virtualization imports the container from the URL endpoint. Depending on the size of the image, this process might take several minutes. 7.10.2. Stopping a virtual machine You can stop a virtual machine from the web console. Procedure Click Virtualization VirtualMachines from the side menu. Find the row that contains the virtual machine that you want to stop. Navigate to the appropriate menu for your use case: To stay on this page, where you can perform actions on multiple virtual machines: Click the Options menu located at the far right end of the row and click Stop VirtualMachine . To view comprehensive information about the selected virtual machine before you stop it: Access the VirtualMachine details page by clicking the name of the virtual machine. Click Actions Stop . 7.10.3. Restarting a virtual machine You can restart a running virtual machine from the web console. Important To avoid errors, do not restart a virtual machine while it has a status of Importing . Procedure Click Virtualization VirtualMachines from the side menu. Find the row that contains the virtual machine that you want to restart. Navigate to the appropriate menu for your use case: To stay on this page, where you can perform actions on multiple virtual machines: Click the Options menu located at the far right end of the row and click Restart . To view comprehensive information about the selected virtual machine before you restart it: Access the VirtualMachine details page by clicking the name of the virtual machine. Click Actions Restart . 7.10.4. Pausing a virtual machine You can pause a virtual machine from the web console. Procedure Click Virtualization VirtualMachines from the side menu. Find the row that contains the virtual machine that you want to pause. Navigate to the appropriate menu for your use case: To stay on this page, where you can perform actions on multiple virtual machines: Click the Options menu located at the far right end of the row and click Pause VirtualMachine . To view comprehensive information about the selected virtual machine before you pause it: Access the VirtualMachine details page by clicking the name of the virtual machine. Click Actions Pause . 7.10.5. Unpausing a virtual machine You can unpause a paused virtual machine from the web console.
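As noted in the introduction to this section, the same state changes can be driven from the CLI with virtctl . A minimal sketch, assuming a virtual machine named example-vm in the current project; the force-stop flags in particular should be checked against your virtctl version:

# Start, stop, and restart a virtual machine
USD virtctl start example-vm
USD virtctl stop example-vm
USD virtctl restart example-vm
# Force stop a VM that does not shut down gracefully (flags assumed)
USD virtctl stop example-vm --force --grace-period=0
# Pause and unpause a running VM
USD virtctl pause vm example-vm
USD virtctl unpause vm example-vm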
Prerequisites At least one of your virtual machines must have a status of Paused . Procedure Click Virtualization VirtualMachines from the side menu. Find the row that contains the virtual machine that you want to unpause. Navigate to the appropriate menu for your use case: To stay on this page, where you can perform actions on multiple virtual machines: Click the Options menu located at the far right end of the row and click Unpause VirtualMachine . To view comprehensive information about the selected virtual machine before you unpause it: Access the VirtualMachine details page by clicking the name of the virtual machine. Click Actions Unpause . 7.11. Using virtual Trusted Platform Module devices Add a virtual Trusted Platform Module (vTPM) device to a new or existing virtual machine by editing the VirtualMachine (VM) or VirtualMachineInstance (VMI) manifest. 7.11.1. About vTPM devices A virtual Trusted Platform Module (vTPM) device functions like a physical Trusted Platform Module (TPM) hardware chip. You can use a vTPM device with any operating system, but Windows 11 requires the presence of a TPM chip to install or boot. A vTPM device allows VMs created from a Windows 11 image to function without a physical TPM chip. If you do not enable vTPM, then the VM does not recognize a TPM device, even if the node has one. A vTPM device also protects virtual machines by storing secrets without physical hardware. OpenShift Virtualization supports persisting vTPM device state by using Persistent Volume Claims (PVCs) for VMs. You must specify the storage class to be used by the PVC by setting the vmStateStorageClass attribute in the HyperConverged custom resource (CR): kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: vmStateStorageClass: <storage_class_name> # ... Note The storage class must be of type Filesystem and support the ReadWriteMany (RWX) access mode. 7.11.2. Adding a vTPM device to a virtual machine Adding a virtual Trusted Platform Module (vTPM) device to a virtual machine (VM) allows you to run a VM created from a Windows 11 image without a physical TPM device. A vTPM device also stores secrets for that VM. Prerequisites You have installed the OpenShift CLI ( oc ). You have configured a Persistent Volume Claim (PVC) to use a storage class of type Filesystem that supports the ReadWriteMany (RWX) access mode. This is necessary for the vTPM device data to persist across VM reboots. Procedure Run the following command to update the VM configuration: USD oc edit vm <vm_name> -n <namespace> Edit the VM specification to add the vTPM device. For example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: tpm: 1 persistent: true 2 # ... 1 Adds the vTPM device to the VM. 2 Specifies that the vTPM device state persists after the VM is shut down. The default value is false . To apply your changes, save and exit the editor. Optional: If you edited a running virtual machine, you must restart it for the changes to take effect. 7.12. Managing virtual machines with OpenShift Pipelines Red Hat OpenShift Pipelines is a Kubernetes-native CI/CD framework that allows developers to design and run each step of the CI/CD pipeline in its own container. The Scheduling, Scale, and Performance (SSP) Operator integrates OpenShift Virtualization with OpenShift Pipelines. 
The SSP Operator includes tasks and example pipelines that allow you to: Create and manage virtual machines (VMs), persistent volume claims (PVCs), and data volumes Run commands in VMs Manipulate disk images with libguestfs tools Important Managing virtual machines with Red Hat OpenShift Pipelines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 7.12.1. Prerequisites You have access to an OpenShift Container Platform cluster with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have installed OpenShift Pipelines . 7.12.2. Deploying the Scheduling, Scale, and Performance (SSP) resources The SSP Operator example Tekton Tasks and Pipelines are not deployed by default when you install OpenShift Virtualization. To deploy the SSP Operator's Tekton resources, enable the deployTektonTaskResources feature gate in the HyperConverged custom resource (CR). Procedure Open the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Set the spec.featureGates.deployTektonTaskResources field to true . apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: kubevirt-hyperconverged spec: tektonPipelinesNamespace: <user_namespace> 1 featureGates: deployTektonTaskResources: true 2 # ... 1 The namespace where the pipelines are to be run. 2 The feature gate to be enabled to deploy Tekton resources by SSP operator. Note The tasks and example pipelines remain available even if you disable the feature gate later. Save your changes and exit the editor. 7.12.3. Virtual machine tasks supported by the SSP Operator The following table shows the tasks that are included as part of the SSP Operator. Table 7.3. Virtual machine tasks supported by the SSP Operator Task Description create-vm-from-manifest Create a virtual machine from a provided manifest or with virtctl . create-vm-from-template Create a virtual machine from a template. copy-template Copy a virtual machine template. modify-vm-template Modify a virtual machine template. modify-data-object Create or delete data volumes or data sources. cleanup-vm Run a script or a command in a virtual machine and stop or delete the virtual machine afterward. disk-virt-customize Use the virt-customize tool to run a customization script on a target PVC. disk-virt-sysprep Use the virt-sysprep tool to run a sysprep script on a target PVC. wait-for-vmi-status Wait for a specific status of a virtual machine instance and fail or succeed based on the status. Note Virtual machine creation in pipelines now utilizes ClusterInstanceType and ClusterPreference instead of template-based tasks, which have been deprecated. The create-vm-from-template , copy-template , and modify-vm-template commands remain available but are not used in default pipeline tasks. 7.12.4. Example pipelines The SSP Operator includes the following example Pipeline manifests. You can run the example pipelines by using the web console or CLI. 
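Before starting a run, you can confirm that the example pipelines and the tasks that they reference were deployed to the namespace configured in the tektonPipelinesNamespace field. A minimal check, assuming that namespace is used:

USD oc get pipelines,tasks -n <user_namespace>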
You might have to run more than one installer pipeline if you need multiple versions of Windows. If you run more than one installer pipeline, each one requires unique parameters, such as the autounattend config map and base image name. For example, if you need Windows 10 and Windows 11 or Windows Server 2022 images, you have to run both the Windows EFI installer pipeline and the Windows BIOS installer pipeline. However, if you need Windows 11 and Windows Server 2022 images, you have to run only the Windows EFI installer pipeline. Windows EFI installer pipeline This pipeline installs Windows 11 or Windows Server 2022 into a new data volume from a Windows installation image (ISO file). A custom answer file is used to run the installation process. Windows BIOS installer pipeline This pipeline installs Windows 10 into a new data volume from a Windows installation image, also called an ISO file. A custom answer file is used to run the installation process. Windows customize pipeline This pipeline clones the data volume of a basic Windows 10, 11, or Windows Server 2022 installation, customizes it by installing Microsoft SQL Server Express or Microsoft Visual Studio Code, and then creates a new image and template. Note The example pipelines use a config map file with sysprep predefined by OpenShift Container Platform and suitable for Microsoft ISO files. For ISO files pertaining to different Windows editions, it may be necessary to create a new config map file with a system-specific sysprep definition. 7.12.4.1. Running the example pipelines using the web console You can run the example pipelines from the Pipelines menu in the web console. Procedure Click Pipelines Pipelines in the side menu. Select a pipeline to open the Pipeline details page. From the Actions list, select Start . The Start Pipeline dialog is displayed. Keep the default values for the parameters and then click Start to run the pipeline. The Details tab tracks the progress of each task and displays the pipeline status. 7.12.4.2. Running the example pipelines using the CLI Use a PipelineRun resource to run the example pipelines. A PipelineRun object is the running instance of a pipeline. It instantiates a pipeline for execution with specific inputs, outputs, and execution parameters on a cluster. It also creates a TaskRun object for each task in the pipeline. Procedure To run the Windows 10 installer pipeline, create the following PipelineRun manifest: apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: generateName: windows10-installer-run- labels: pipelinerun: windows10-installer-run spec: params: - name: winImageDownloadURL value: <link_to_windows_10_iso> 1 pipelineRef: name: windows10-installer taskRunSpecs: - pipelineTaskName: copy-template taskServiceAccountName: copy-template-task - pipelineTaskName: modify-vm-template taskServiceAccountName: modify-vm-template-task - pipelineTaskName: create-vm-from-template taskServiceAccountName: create-vm-from-template-task - pipelineTaskName: wait-for-vmi-status taskServiceAccountName: wait-for-vmi-status-task - pipelineTaskName: create-base-dv taskServiceAccountName: modify-data-object-task - pipelineTaskName: cleanup-vm taskServiceAccountName: cleanup-vm-task status: {} 1 Specify the URL for the Windows 10 64-bit ISO file. The product language must be English (United States).
Apply the PipelineRun manifest: USD oc apply -f windows10-installer-run.yaml To run the Windows 10 customize pipeline, create the following PipelineRun manifest: apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: generateName: windows10-customize-run- labels: pipelinerun: windows10-customize-run spec: params: - name: allowReplaceGoldenTemplate value: true - name: allowReplaceCustomizationTemplate value: true pipelineRef: name: windows10-customize taskRunSpecs: - pipelineTaskName: copy-template-customize taskServiceAccountName: copy-template-task - pipelineTaskName: modify-vm-template-customize taskServiceAccountName: modify-vm-template-task - pipelineTaskName: create-vm-from-template taskServiceAccountName: create-vm-from-template-task - pipelineTaskName: wait-for-vmi-status taskServiceAccountName: wait-for-vmi-status-task - pipelineTaskName: create-base-dv taskServiceAccountName: modify-data-object-task - pipelineTaskName: cleanup-vm taskServiceAccountName: cleanup-vm-task - pipelineTaskName: copy-template-golden taskServiceAccountName: copy-template-task - pipelineTaskName: modify-vm-template-golden taskServiceAccountName: modify-vm-template-task status: {} Apply the PipelineRun manifest: USD oc apply -f windows10-customize-run.yaml 7.12.5. Additional resources Creating CI/CD solutions for applications using Red Hat OpenShift Pipelines Creating a Windows VM 7.13. Advanced virtual machine management 7.13.1. Working with resource quotas for virtual machines Create and manage resource quotas for virtual machines. 7.13.1.1. Setting resource quota limits for virtual machines Resource quotas that only use requests automatically work with virtual machines (VMs). If your resource quota uses limits, you must manually set resource limits on VMs. Resource limits must be at least 100 MiB larger than resource requests. Procedure Set limits for a VM by editing the VirtualMachine manifest. For example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: with-limits spec: running: false template: spec: domain: # ... resources: requests: memory: 128Mi limits: memory: 256Mi 1 1 This configuration is supported because the limits.memory value is at least 100Mi larger than the requests.memory value. Save the VirtualMachine manifest. 7.13.1.2. Additional resources Resource quotas per project Resource quotas across multiple projects 7.13.2. Specifying nodes for virtual machines You can place virtual machines (VMs) on specific nodes by using node placement rules. 7.13.2.1. About node placement for virtual machines To ensure that virtual machines (VMs) run on appropriate nodes, you can configure node placement rules. You might want to do this if: You have several VMs. To ensure fault tolerance, you want them to run on different nodes. You have two chatty VMs. To avoid redundant inter-node routing, you want the VMs to run on the same node. Your VMs require specific hardware features that are not present on all available nodes. You have a pod that adds capabilities to a node, and you want to place a VM on that node so that it can use those capabilities. Note Virtual machine placement relies on any existing node placement rules for workloads. If workloads are excluded from specific nodes on the component level, virtual machines cannot be placed on those nodes. You can use the following rule types in the spec field of a VirtualMachine manifest: nodeSelector Allows virtual machines to be scheduled on nodes that are labeled with the key-value pair or pairs that you specify in this field. 
The node must have labels that exactly match all listed pairs. affinity Enables you to use more expressive syntax to set rules that match nodes with virtual machines. For example, you can specify that a rule is a preference, rather than a hard requirement, so that virtual machines are still scheduled if the rule is not satisfied. Pod affinity, pod anti-affinity, and node affinity are supported for virtual machine placement. Pod affinity works for virtual machines because the VirtualMachine workload type is based on the Pod object. tolerations Allows virtual machines to be scheduled on nodes that have matching taints. If a taint is applied to a node, that node only accepts virtual machines that tolerate the taint. Note Affinity rules only apply during scheduling. OpenShift Container Platform does not reschedule running workloads if the constraints are no longer met. 7.13.2.2. Node placement examples The following example YAML file snippets use nodePlacement , affinity , and tolerations fields to customize node placement for virtual machines. 7.13.2.2.1. Example: VM node placement with nodeSelector In this example, the virtual machine requires a node that has metadata containing both example-key-1 = example-value-1 and example-key-2 = example-value-2 labels. Warning If there are no nodes that fit this description, the virtual machine is not scheduled. Example VM manifest metadata: name: example-vm-node-selector apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: nodeSelector: example-key-1: example-value-1 example-key-2: example-value-2 # ... 7.13.2.2.2. Example: VM node placement with pod affinity and pod anti-affinity In this example, the VM must be scheduled on a node that has a running pod with the label example-key-1 = example-value-1 . If there is no such pod running on any node, the VM is not scheduled. If possible, the VM is not scheduled on a node that has any pod with the label example-key-2 = example-value-2 . However, if all candidate nodes have a pod with this label, the scheduler ignores this constraint. Example VM manifest metadata: name: example-vm-pod-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchExpressions: - key: example-key-1 operator: In values: - example-value-1 topologyKey: kubernetes.io/hostname podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: example-key-2 operator: In values: - example-value-2 topologyKey: kubernetes.io/hostname # ... 1 If you use the requiredDuringSchedulingIgnoredDuringExecution rule type, the VM is not scheduled if the constraint is not met. 2 If you use the preferredDuringSchedulingIgnoredDuringExecution rule type, the VM is still scheduled if the constraint is not met, as long as all required constraints are met. 7.13.2.2.3. Example: VM node placement with node affinity In this example, the VM must be scheduled on a node that has the label example.io/example-key = example-value-1 or the label example.io/example-key = example-value-2 . The constraint is met if only one of the labels is present on the node. If neither label is present, the VM is not scheduled. If possible, the scheduler avoids nodes that have the label example-node-label-key = example-node-label-value . However, if all candidate nodes have this label, the scheduler ignores this constraint. 
Example VM manifest metadata: name: example-vm-node-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 nodeSelectorTerms: - matchExpressions: - key: example.io/example-key operator: In values: - example-value-1 - example-value-2 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 preference: matchExpressions: - key: example-node-label-key operator: In values: - example-node-label-value # ... 1 If you use the requiredDuringSchedulingIgnoredDuringExecution rule type, the VM is not scheduled if the constraint is not met. 2 If you use the preferredDuringSchedulingIgnoredDuringExecution rule type, the VM is still scheduled if the constraint is not met, as long as all required constraints are met. 7.13.2.2.4. Example: VM node placement with tolerations In this example, nodes that are reserved for virtual machines are already labeled with the key=virtualization:NoSchedule taint. Because this virtual machine has matching tolerations , it can schedule onto the tainted nodes. Note A virtual machine that tolerates a taint is not required to schedule onto a node with that taint. Example VM manifest metadata: name: example-vm-tolerations apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: tolerations: - key: "key" operator: "Equal" value: "virtualization" effect: "NoSchedule" # ... 7.13.2.3. Additional resources Specifying nodes for virtualization components Placing pods on specific nodes using node selectors Controlling pod placement on nodes using node affinity rules Controlling pod placement using node taints 7.13.3. Configuring certificate rotation Configure certificate rotation parameters to replace existing certificates. 7.13.3.1. Configuring certificate rotation You can do this during OpenShift Virtualization installation in the web console or after installation in the HyperConverged custom resource (CR). Procedure Open the HyperConverged CR by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Edit the spec.certConfig fields as shown in the following example. To avoid overloading the system, ensure that all values are greater than or equal to 10 minutes. Express all values as strings that comply with the golang ParseDuration format . apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: certConfig: ca: duration: 48h0m0s renewBefore: 24h0m0s 1 server: duration: 24h0m0s 2 renewBefore: 12h0m0s 3 1 The value of ca.renewBefore must be less than or equal to the value of ca.duration . 2 The value of server.duration must be less than or equal to the value of ca.duration . 3 The value of server.renewBefore must be less than or equal to the value of server.duration . Apply the YAML file to your cluster. 7.13.3.2. Troubleshooting certificate rotation parameters Deleting one or more certConfig values causes them to revert to the default values, unless the default values conflict with one of the following conditions: The value of ca.renewBefore must be less than or equal to the value of ca.duration . The value of server.duration must be less than or equal to the value of ca.duration . The value of server.renewBefore must be less than or equal to the value of server.duration . If the default values conflict with these conditions, you will receive an error. 
If you remove the server.duration value in the following example, the default value of 24h0m0s is greater than the value of ca.duration , conflicting with the specified conditions. Example certConfig: ca: duration: 4h0m0s renewBefore: 1h0m0s server: duration: 4h0m0s renewBefore: 4h0m0s This results in the following error message: error: hyperconvergeds.hco.kubevirt.io "kubevirt-hyperconverged" could not be patched: admission webhook "validate-hco.kubevirt.io" denied the request: spec.certConfig: ca.duration is smaller than server.duration The error message only mentions the first conflict. Review all certConfig values before you proceed. 7.13.4. Configuring the default CPU model Use the defaultCPUModel setting in the HyperConverged custom resource (CR) to define a cluster-wide default CPU model. The virtual machine (VM) CPU model depends on the availability of CPU models within the VM and the cluster. If the VM does not have a defined CPU model: The defaultCPUModel is automatically set using the CPU model defined at the cluster-wide level. If both the VM and the cluster have a defined CPU model: The VM's CPU model takes precedence. If neither the VM nor the cluster have a defined CPU model: The host-model is automatically set using the CPU model defined at the host level. 7.13.4.1. Configuring the default CPU model Configure the defaultCPUModel by updating the HyperConverged custom resource (CR). You can change the defaultCPUModel while OpenShift Virtualization is running. Note The defaultCPUModel is case sensitive. Prerequisites Install the OpenShift CLI (oc). Procedure Open the HyperConverged CR by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Add the defaultCPUModel field to the CR and set the value to the name of a CPU model that exists in the cluster: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: defaultCPUModel: "EPYC" Apply the YAML file to your cluster. 7.13.5. Using UEFI mode for virtual machines You can boot a virtual machine (VM) in Unified Extensible Firmware Interface (UEFI) mode. 7.13.5.1. About UEFI mode for virtual machines Unified Extensible Firmware Interface (UEFI), like legacy BIOS, initializes hardware components and operating system image files when a computer starts. UEFI supports more modern features and customization options than BIOS, enabling faster boot times. It stores all the information about initialization and startup in a file with a .efi extension, which is stored on a special partition called EFI System Partition (ESP). The ESP also contains the boot loader programs for the operating system that is installed on the computer. 7.13.5.2. Booting virtual machines in UEFI mode You can configure a virtual machine to boot in UEFI mode by editing the VirtualMachine manifest. Prerequisites Install the OpenShift CLI ( oc ). Procedure Edit or create a VirtualMachine manifest file. Use the spec.firmware.bootloader stanza to configure UEFI mode: Booting in UEFI mode with secure boot active apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: special: vm-secureboot name: vm-secureboot spec: template: metadata: labels: special: vm-secureboot spec: domain: devices: disks: - disk: bus: virtio name: containerdisk features: acpi: {} smm: enabled: true 1 firmware: bootloader: efi: secureBoot: true 2 # ... 1 OpenShift Virtualization requires System Management Mode ( SMM ) to be enabled for Secure Boot in UEFI mode to occur.
2 OpenShift Virtualization supports a VM with or without Secure Boot when using UEFI mode. If Secure Boot is enabled, then UEFI mode is required. However, UEFI mode can be enabled without using Secure Boot. Apply the manifest to your cluster by running the following command: USD oc create -f <file_name>.yaml 7.13.6. Configuring PXE booting for virtual machines PXE booting, or network booting, is available in OpenShift Virtualization. Network booting allows a computer to boot and load an operating system or other program without requiring a locally attached storage device. For example, you can use it to choose your desired OS image from a PXE server when deploying a new host. 7.13.6.1. Prerequisites A Linux bridge must be connected . The PXE server must be connected to the same VLAN as the bridge. 7.13.6.2. PXE booting with a specified MAC address As an administrator, you can boot a client over the network by first creating a NetworkAttachmentDefinition object for your PXE network. Then, reference the network attachment definition in your virtual machine instance configuration file before you start the virtual machine instance. You can also specify a MAC address in the virtual machine instance configuration file, if required by the PXE server. Prerequisites A Linux bridge must be connected. The PXE server must be connected to the same VLAN as the bridge. Procedure Configure a PXE network on the cluster: Create the network attachment definition file for PXE network pxe-net-conf : apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: pxe-net-conf 1 spec: config: | { "cniVersion": "0.3.1", "name": "pxe-net-conf", 2 "type": "bridge", 3 "bridge": "bridge-interface", 4 "macspoofchk": false, 5 "vlan": 100, 6 "preserveDefaultVlan": false 7 } 1 The name for the NetworkAttachmentDefinition object. 2 The name for the configuration. It is recommended to match the configuration name to the name value of the network attachment definition. 3 The actual name of the Container Network Interface (CNI) plugin that provides the network for this network attachment definition. This example uses a Linux bridge CNI plugin. You can also use an OVN-Kubernetes localnet or an SR-IOV CNI plugin. 4 The name of the Linux bridge configured on the node. 5 Optional: A flag to enable the MAC spoof check. When set to true , you cannot change the MAC address of the pod or guest interface. This attribute allows only a single MAC address to exit the pod, which provides security against a MAC spoofing attack. 6 Optional: The VLAN tag. No additional VLAN configuration is required on the node network configuration policy. 7 Optional: Indicates whether the VM connects to the bridge through the default VLAN. The default value is true . Create the network attachment definition by using the file you created in the step: USD oc create -f pxe-net-conf.yaml Edit the virtual machine instance configuration file to include the details of the interface and network. Specify the network and MAC address, if required by the PXE server. If the MAC address is not specified, a value is assigned automatically. Ensure that bootOrder is set to 1 so that the interface boots first. In this example, the interface is connected to a network called <pxe-net> : interfaces: - masquerade: {} name: default - bridge: {} name: pxe-net macAddress: de:00:00:00:00:de bootOrder: 1 Note Boot order is global for interfaces and disks. Assign a boot device number to the disk to ensure proper booting after operating system provisioning. 
Set the disk bootOrder value to 2 : devices: disks: - disk: bus: virtio name: containerdisk bootOrder: 2 Specify that the network is connected to the previously created network attachment definition. In this scenario, <pxe-net> is connected to the network attachment definition called <pxe-net-conf> : networks: - name: default pod: {} - name: pxe-net multus: networkName: pxe-net-conf Create the virtual machine instance: USD oc create -f vmi-pxe-boot.yaml Example output virtualmachineinstance.kubevirt.io "vmi-pxe-boot" created Wait for the virtual machine instance to run: USD oc get vmi vmi-pxe-boot -o yaml | grep -i phase phase: Running View the virtual machine instance using VNC: USD virtctl vnc vmi-pxe-boot Watch the boot screen to verify that the PXE boot is successful. Log in to the virtual machine instance: USD virtctl console vmi-pxe-boot Verification Verify the interfaces and MAC address on the virtual machine and that the interface connected to the bridge has the specified MAC address. In this case, eth1 was used for the PXE boot, without an IP address. The other interface, eth0 , got an IP address from OpenShift Container Platform. USD ip addr Example output ... 3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff 7.13.6.3. OpenShift Virtualization networking glossary The following terms are used throughout OpenShift Virtualization documentation: Container Network Interface (CNI) A Cloud Native Computing Foundation project, focused on container network connectivity. OpenShift Virtualization uses CNI plugins to build upon the basic Kubernetes networking functionality. Multus A "meta" CNI plugin that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs. Custom resource definition (CRD) A Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource. Network attachment definition (NAD) A CRD introduced by the Multus project that allows you to attach pods, virtual machines, and virtual machine instances to one or more networks. Node network configuration policy (NNCP) A CRD introduced by the nmstate project, describing the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a NodeNetworkConfigurationPolicy manifest to the cluster. 7.13.7. Using huge pages with virtual machines You can use huge pages as backing memory for virtual machines in your cluster. 7.13.7.1. Prerequisites Nodes must have pre-allocated huge pages configured . 7.13.7.2. What huge pages do Memory is managed in blocks known as pages. On most systems, a page is 4Ki. 1Mi of memory is equal to 256 pages; 1Gi of memory is 262,144 pages, and so on. CPUs have a built-in memory management unit that manages a list of these pages in hardware. The Translation Lookaside Buffer (TLB) is a small hardware cache of virtual-to-physical page mappings. If the virtual address passed in a hardware instruction can be found in the TLB, the mapping can be determined quickly. If not, a TLB miss occurs, and the system falls back to slower, software-based address translation, resulting in performance issues. Since the size of the TLB is fixed, the only way to reduce the chance of a TLB miss is to increase the page size. A huge page is a memory page that is larger than 4Ki. On x86_64 architectures, there are two common huge page sizes: 2Mi and 1Gi.
Sizes vary on other architectures. To use huge pages, code must be written so that applications are aware of them. Transparent Huge Pages (THP) attempt to automate the management of huge pages without application knowledge, but they have limitations. In particular, they are limited to 2Mi page sizes. THP can lead to performance degradation on nodes with high memory utilization or fragmentation due to defragmenting efforts of THP, which can lock memory pages. For this reason, some applications may be designed to (or recommend) usage of pre-allocated huge pages instead of THP. In OpenShift Virtualization, virtual machines can be configured to consume pre-allocated huge pages. 7.13.7.3. Configuring huge pages for virtual machines You can configure virtual machines to use pre-allocated huge pages by including the memory.hugepages.pageSize and resources.requests.memory parameters in your virtual machine configuration. The memory request must be divisible by the page size. For example, you cannot request 500Mi memory with a page size of 1Gi . Note The memory layouts of the host and the guest OS are unrelated. Huge pages requested in the virtual machine manifest apply to QEMU. Huge pages inside the guest can only be configured based on the amount of available memory of the virtual machine instance. If you edit a running virtual machine, the virtual machine must be rebooted for the changes to take effect. Prerequisites Nodes must have pre-allocated huge pages configured. Procedure In your virtual machine configuration, add the resources.requests.memory and memory.hugepages.pageSize parameters to the spec.domain . The following configuration snippet is for a virtual machine that requests a total of 4Gi memory with a page size of 1Gi : kind: VirtualMachine # ... spec: domain: resources: requests: memory: "4Gi" 1 memory: hugepages: pageSize: "1Gi" 2 # ... 1 The total amount of memory requested for the virtual machine. This value must be divisible by the page size. 2 The size of each huge page. Valid values for x86_64 architecture are 1Gi and 2Mi . The page size must be smaller than the requested memory. Apply the virtual machine configuration: USD oc apply -f <virtual_machine>.yaml 7.13.8. Enabling dedicated resources for virtual machines To improve performance, you can dedicate node resources, such as CPU, to a virtual machine. 7.13.8.1. About dedicated resources When you enable dedicated resources for your virtual machine, your virtual machine's workload is scheduled on CPUs that will not be used by other processes. By using dedicated resources, you can improve the performance of the virtual machine and the accuracy of latency predictions. 7.13.8.2. Prerequisites The CPU Manager must be configured on the node. Verify that the node has the cpumanager = true label before scheduling virtual machine workloads. The virtual machine must be powered off. 7.13.8.3. Enabling dedicated resources for a virtual machine You enable dedicated resources for a virtual machine in the Details tab. Virtual machines that were created from a Red Hat template can be configured with dedicated resources. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. On the Configuration Scheduling tab, click the edit icon beside Dedicated Resources . Select Schedule this workload with dedicated resources (guaranteed policy) . Click Save . 7.13.9. 
Scheduling virtual machines You can schedule a virtual machine (VM) on a node by ensuring that the VM's CPU model and policy attribute are matched for compatibility with the CPU models and policy attributes supported by the node. 7.13.9.1. Policy attributes You can schedule a virtual machine (VM) by specifying a policy attribute and a CPU feature that is matched for compatibility when the VM is scheduled on a node. A policy attribute specified for a VM determines how that VM is scheduled on a node. Policy attribute Description force The VM is forced to be scheduled on a node. This is true even if the host CPU does not support the VM's CPU. require Default policy that applies to a VM if the VM is not configured with a specific CPU model and feature specification. If a node is not configured to support CPU node discovery with this default policy attribute or any one of the other policy attributes, VMs are not scheduled on that node. Either the host CPU must support the VM's CPU or the hypervisor must be able to emulate the supported CPU model. optional The VM is added to a node if that VM is supported by the host's physical machine CPU. disable The VM cannot be scheduled with CPU node discovery. forbid The VM is not scheduled even if the feature is supported by the host CPU and CPU node discovery is enabled. 7.13.9.2. Setting a policy attribute and CPU feature You can set a policy attribute and CPU feature for each virtual machine (VM) to ensure that it is scheduled on a node according to policy and feature. The CPU feature that you set is verified to ensure that it is supported by the host CPU or emulated by the hypervisor. Procedure Edit the domain spec of your VM configuration file. The following example sets the CPU feature and the require policy for a virtual machine (VM): apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: features: - name: apic 1 policy: require 2 1 Name of the CPU feature for the VM. 2 Policy attribute for the VM. 7.13.9.3. Scheduling virtual machines with the supported CPU model You can configure a CPU model for a virtual machine (VM) to schedule it on a node where its CPU model is supported. Procedure Edit the domain spec of your virtual machine configuration file. The following example shows a specific CPU model defined for a VM: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: Conroe 1 1 CPU model for the VM. 7.13.9.4. Scheduling virtual machines with the host model When the CPU model for a virtual machine (VM) is set to host-model , the VM inherits the CPU model of the node where it is scheduled. Procedure Edit the domain spec of your VM configuration file. The following example shows host-model being specified for the virtual machine: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: host-model 1 1 The VM that inherits the CPU model of the node where it is scheduled. 7.13.9.5. Scheduling virtual machines with a custom scheduler You can use a custom scheduler to schedule a virtual machine (VM) on a node. Prerequisites A secondary scheduler is configured for your cluster. Procedure Add the custom scheduler to the VM configuration by editing the VirtualMachine manifest. For example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora spec: running: true template: spec: schedulerName: my-scheduler 1 domain: devices: disks: - name: containerdisk disk: bus: virtio # ...
1 The name of the custom scheduler. If the schedulerName value does not match an existing scheduler, the virt-launcher pod stays in a Pending state until the specified scheduler is found. Verification Verify that the VM is using the custom scheduler specified in the VirtualMachine manifest by checking the virt-launcher pod events: View the list of pods in your cluster by entering the following command: USD oc get pods Example output NAME READY STATUS RESTARTS AGE virt-launcher-vm-fedora-dpc87 2/2 Running 0 24m Run the following command to display the pod events: USD oc describe pod virt-launcher-vm-fedora-dpc87 The value of the From field in the output verifies that the scheduler name matches the custom scheduler specified in the VirtualMachine manifest: Example output [...] Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 21m my-scheduler Successfully assigned default/virt-launcher-vm-fedora-dpc87 to node01 [...] Additional resources Deploying a secondary scheduler 7.13.10. Configuring PCI passthrough The Peripheral Component Interconnect (PCI) passthrough feature enables you to access and manage hardware devices from a virtual machine (VM). When PCI passthrough is configured, the PCI devices function as if they were physically attached to the guest operating system. Cluster administrators can expose and manage host devices that are permitted to be used in the cluster by using the oc command-line interface (CLI). 7.13.10.1. Preparing nodes for GPU passthrough You can prevent GPU operands from deploying on worker nodes that you designated for GPU passthrough. 7.13.10.1.1. Preventing NVIDIA GPU operands from deploying on nodes If you use the NVIDIA GPU Operator in your cluster, you can apply the nvidia.com/gpu.deploy.operands=false label to nodes that you do not want to configure for GPU or vGPU operands. This label prevents the creation of the pods that configure GPU or vGPU operands and terminates the pods if they already exist. Prerequisites The OpenShift CLI ( oc ) is installed. Procedure Label the node by running the following command: USD oc label node <node_name> nvidia.com/gpu.deploy.operands=false 1 1 Replace <node_name> with the name of a node where you do not want to install the NVIDIA GPU operands. Verification Verify that the label was added to the node by running the following command: USD oc describe node <node_name> Optional: If GPU operands were previously deployed on the node, verify their removal. Check the status of the pods in the nvidia-gpu-operator namespace by running the following command: USD oc get pods -n nvidia-gpu-operator Example output NAME READY STATUS RESTARTS AGE gpu-operator-59469b8c5c-hw9wj 1/1 Running 0 8d nvidia-sandbox-validator-7hx98 1/1 Running 0 8d nvidia-sandbox-validator-hdb7p 1/1 Running 0 8d nvidia-sandbox-validator-kxwj7 1/1 Terminating 0 9d nvidia-vfio-manager-7w9fs 1/1 Running 0 8d nvidia-vfio-manager-866pz 1/1 Running 0 8d nvidia-vfio-manager-zqtck 1/1 Terminating 0 9d Monitor the pod status until the pods with Terminating status are removed: USD oc get pods -n nvidia-gpu-operator Example output NAME READY STATUS RESTARTS AGE gpu-operator-59469b8c5c-hw9wj 1/1 Running 0 8d nvidia-sandbox-validator-7hx98 1/1 Running 0 8d nvidia-sandbox-validator-hdb7p 1/1 Running 0 8d nvidia-vfio-manager-7w9fs 1/1 Running 0 8d nvidia-vfio-manager-866pz 1/1 Running 0 8d 7.13.10.2. Preparing host devices for PCI passthrough 7.13.10.2.1. 
About preparing a host device for PCI passthrough To prepare a host device for PCI passthrough by using the CLI, create a MachineConfig object and add kernel arguments to enable the Input-Output Memory Management Unit (IOMMU). Bind the PCI device to the Virtual Function I/O (VFIO) driver and then expose it in the cluster by editing the permittedHostDevices field of the HyperConverged custom resource (CR). The permittedHostDevices list is empty when you first install the OpenShift Virtualization Operator. To remove a PCI host device from the cluster by using the CLI, delete the PCI device information from the HyperConverged CR. 7.13.10.2.2. Adding kernel arguments to enable the IOMMU driver To enable the IOMMU driver in the kernel, create the MachineConfig object and add the kernel arguments. Prerequisites You have cluster administrator permissions. Your CPU hardware is Intel or AMD. You enabled Intel Virtualization Technology for Directed I/O extensions or AMD IOMMU in the BIOS. Procedure Create a MachineConfig object that identifies the kernel argument. The following example shows a kernel argument for an Intel CPU. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 100-worker-iommu 2 spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on 3 # ... 1 Applies the new kernel argument only to worker nodes. 2 The name indicates the ranking of this kernel argument (100) among the machine configs and its purpose. If you have an AMD CPU, specify the kernel argument as amd_iommu=on . 3 Identifies the kernel argument as intel_iommu for an Intel CPU. Create the new MachineConfig object: USD oc create -f 100-worker-kernel-arg-iommu.yaml Verification Verify that the new MachineConfig object was added. USD oc get MachineConfig 7.13.10.2.3. Binding PCI devices to the VFIO driver To bind PCI devices to the VFIO (Virtual Function I/O) driver, obtain the values for vendor-ID and device-ID from each device and create a list with the values. Add this list to the MachineConfig object. The MachineConfig Operator generates the /etc/modprobe.d/vfio.conf on the nodes with the PCI devices, and binds the PCI devices to the VFIO driver. Prerequisites You added kernel arguments to enable IOMMU for the CPU. Procedure Run the lspci command to obtain the vendor-ID and the device-ID for the PCI device. USD lspci -nnv | grep -i nvidia Example output 02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1) Create a Butane config file, 100-worker-vfiopci.bu , binding the PCI device to the VFIO driver. Note See "Creating machine configs with Butane" for information about Butane. Example variant: openshift version: 4.14.0 metadata: name: 100-worker-vfiopci labels: machineconfiguration.openshift.io/role: worker 1 storage: files: - path: /etc/modprobe.d/vfio.conf mode: 0644 overwrite: true contents: inline: | options vfio-pci ids=10de:1eb8 2 - path: /etc/modules-load.d/vfio-pci.conf 3 mode: 0644 overwrite: true contents: inline: vfio-pci 1 Applies the new kernel argument only to worker nodes. 2 Specify the previously determined vendor-ID value ( 10de ) and the device-ID value ( 1eb8 ) to bind a single device to the VFIO driver. You can add a list of multiple devices with their vendor and device information. 3 The file that loads the vfio-pci kernel module on the worker nodes. 
Use Butane to generate a MachineConfig object file, 100-worker-vfiopci.yaml , containing the configuration to be delivered to the worker nodes: USD butane 100-worker-vfiopci.bu -o 100-worker-vfiopci.yaml Apply the MachineConfig object to the worker nodes: USD oc apply -f 100-worker-vfiopci.yaml Verify that the MachineConfig object was added. USD oc get MachineConfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 00-worker d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 100-worker-iommu 3.2.0 30s 100-worker-vfiopci-configuration 3.2.0 30s Verification Verify that the VFIO driver is loaded. USD lspci -nnk -d 10de: The output confirms that the VFIO driver is being used. Example output 7.13.10.2.4. Exposing PCI host devices in the cluster using the CLI To expose PCI host devices in the cluster, add details about the PCI devices to the spec.permittedHostDevices.pciHostDevices array of the HyperConverged custom resource (CR). Procedure Edit the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Add the PCI device information to the spec.permittedHostDevices.pciHostDevices array. For example: Example configuration file apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: 1 pciHostDevices: 2 - pciDeviceSelector: "10DE:1DB6" 3 resourceName: "nvidia.com/GV100GL_Tesla_V100" 4 - pciDeviceSelector: "10DE:1EB8" resourceName: "nvidia.com/TU104GL_Tesla_T4" - pciDeviceSelector: "8086:6F54" resourceName: "intel.com/qat" externalResourceProvider: true 5 # ... 1 The host devices that are permitted to be used in the cluster. 2 The list of PCI devices available on the node. 3 The vendor-ID and the device-ID required to identify the PCI device. 4 The name of a PCI host device. 5 Optional: Setting this field to true indicates that the resource is provided by an external device plugin. OpenShift Virtualization allows the usage of this device in the cluster but leaves the allocation and monitoring to an external device plugin. Note The above example snippet shows two PCI host devices that are named nvidia.com/GV100GL_Tesla_V100 and nvidia.com/TU104GL_Tesla_T4 added to the list of permitted host devices in the HyperConverged CR. These devices have been tested and verified to work with OpenShift Virtualization. Save your changes and exit the editor. Verification Verify that the PCI host devices were added to the node by running the following command. The example output shows that there is one device each associated with the nvidia.com/GV100GL_Tesla_V100 , nvidia.com/TU104GL_Tesla_T4 , and intel.com/qat resource names. 
USD oc describe node <node_name> Example output Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250 7.13.10.2.5. Removing PCI host devices from the cluster using the CLI To remove a PCI host device from the cluster, delete the information for that device from the HyperConverged custom resource (CR). Procedure Edit the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Remove the PCI device information from the spec.permittedHostDevices.pciHostDevices array by deleting the pciDeviceSelector , resourceName and externalResourceProvider (if applicable) fields for the appropriate device. In this example, the intel.com/qat resource has been deleted. Example configuration file apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: pciHostDevices: - pciDeviceSelector: "10DE:1DB6" resourceName: "nvidia.com/GV100GL_Tesla_V100" - pciDeviceSelector: "10DE:1EB8" resourceName: "nvidia.com/TU104GL_Tesla_T4" # ... Save your changes and exit the editor. Verification Verify that the PCI host device was removed from the node by running the following command. The example output shows that there are zero devices associated with the intel.com/qat resource name. USD oc describe node <node_name> Example output Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250 7.13.10.3. Configuring virtual machines for PCI passthrough After the PCI devices have been added to the cluster, you can assign them to virtual machines. The PCI devices are now available as if they are physically connected to the virtual machines. 7.13.10.3.1. Assigning a PCI device to a virtual machine When a PCI device is available in a cluster, you can assign it to a virtual machine and enable PCI passthrough. Procedure Assign the PCI device to a virtual machine as a host device. Example apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: hostDevices: - deviceName: nvidia.com/TU104GL_Tesla_T4 1 name: hostdevices1 1 The name of the PCI device that is permitted on the cluster as a host device. The virtual machine can access this host device. Verification Use the following command to verify that the host device is available from the virtual machine. USD lspci -nnk | grep NVIDIA Example output USD 02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1) 7.13.10.4. 
Additional resources Enabling Intel VT-X and AMD-V Virtualization Hardware Extensions in BIOS Managing file permissions Postinstallation machine configuration tasks 7.13.11. Configuring virtual GPUs If you have graphics processing unit (GPU) cards, OpenShift Virtualization can automatically create virtual GPUs (vGPUs) that you can assign to virtual machines (VMs). 7.13.11.1. About using virtual GPUs with OpenShift Virtualization Some graphics processing unit (GPU) cards support the creation of virtual GPUs (vGPUs). OpenShift Virtualization can automatically create vGPUs and other mediated devices if an administrator provides configuration details in the HyperConverged custom resource (CR). This automation is especially useful for large clusters. Note Refer to your hardware vendor's documentation for functionality and support details. Mediated device A physical device that is divided into one or more virtual devices. A vGPU is a type of mediated device (mdev); the performance of the physical GPU is divided among the virtual devices. You can assign mediated devices to one or more virtual machines (VMs), but the number of guests must be compatible with your GPU. Some GPUs do not support multiple guests. 7.13.11.2. Preparing hosts for mediated devices You must enable the Input-Output Memory Management Unit (IOMMU) driver before you can configure mediated devices. 7.13.11.2.1. Adding kernel arguments to enable the IOMMU driver To enable the IOMMU driver in the kernel, create the MachineConfig object and add the kernel arguments. Prerequisites You have cluster administrator permissions. Your CPU hardware is Intel or AMD. You enabled Intel Virtualization Technology for Directed I/O extensions or AMD IOMMU in the BIOS. Procedure Create a MachineConfig object that identifies the kernel argument. The following example shows a kernel argument for an Intel CPU. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 100-worker-iommu 2 spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on 3 # ... 1 Applies the new kernel argument only to worker nodes. 2 The name indicates the ranking of this kernel argument (100) among the machine configs and its purpose. If you have an AMD CPU, specify the kernel argument as amd_iommu=on . 3 Identifies the kernel argument as intel_iommu for an Intel CPU. Create the new MachineConfig object: USD oc create -f 100-worker-kernel-arg-iommu.yaml Verification Verify that the new MachineConfig object was added. USD oc get MachineConfig 7.13.11.3. Configuring the NVIDIA GPU Operator You can use the NVIDIA GPU Operator to provision worker nodes for running GPU-accelerated virtual machines (VMs) in OpenShift Virtualization. Note The NVIDIA GPU Operator is supported only by NVIDIA. For more information, see Obtaining Support from NVIDIA in the Red Hat Knowledgebase. 7.13.11.3.1. About using the NVIDIA GPU Operator You can use the NVIDIA GPU Operator with OpenShift Virtualization to rapidly provision worker nodes for running GPU-enabled virtual machines (VMs). The NVIDIA GPU Operator manages NVIDIA GPU resources in an OpenShift Container Platform cluster and automates tasks that are required when preparing nodes for GPU workloads. 
Before you can deploy application workloads to a GPU resource, you must install components such as the NVIDIA drivers that enable the compute unified device architecture (CUDA), Kubernetes device plugin, container runtime, and other features, such as automatic node labeling and monitoring. By automating these tasks, you can quickly scale the GPU capacity of your infrastructure. The NVIDIA GPU Operator can especially facilitate provisioning complex artificial intelligence and machine learning (AI/ML) workloads. 7.13.11.3.2. Options for configuring mediated devices There are two available methods for configuring mediated devices when using the NVIDIA GPU Operator. The method that Red Hat tests uses OpenShift Virtualization features to schedule mediated devices, while the NVIDIA method only uses the GPU Operator. Using the NVIDIA GPU Operator to configure mediated devices This method exclusively uses the NVIDIA GPU Operator to configure mediated devices. To use this method, refer to NVIDIA GPU Operator with OpenShift Virtualization in the NVIDIA documentation. Using OpenShift Virtualization to configure mediated devices This method, which is tested by Red Hat, uses OpenShift Virtualization's capabilities to configure mediated devices. In this case, the NVIDIA GPU Operator is only used for installing drivers with the NVIDIA vGPU Manager. The GPU Operator does not configure mediated devices. When using the OpenShift Virtualization method, you still configure the GPU Operator by following the NVIDIA documentation . However, this method differs from the NVIDIA documentation in the following ways: You must not overwrite the default disableMDEVConfiguration: false setting in the HyperConverged custom resource (CR). Important Setting this feature gate as described in the NVIDIA documentation prevents OpenShift Virtualization from configuring mediated devices. You must configure your ClusterPolicy manifest so that it matches the following example: Example manifest kind: ClusterPolicy apiVersion: nvidia.com/v1 metadata: name: gpu-cluster-policy spec: operator: defaultRuntime: crio use_ocp_driver_toolkit: true initContainer: {} sandboxWorkloads: enabled: true defaultWorkload: vm-vgpu driver: enabled: false 1 dcgmExporter: {} dcgm: enabled: true daemonsets: {} devicePlugin: {} gfd: {} migManager: enabled: true nodeStatusExporter: enabled: true mig: strategy: single toolkit: enabled: true validator: plugin: env: - name: WITH_WORKLOAD value: "true" vgpuManager: enabled: true 2 repository: <vgpu_container_registry> 3 image: <vgpu_image_name> version: nvidia-vgpu-manager vgpuDeviceManager: enabled: false 4 config: name: vgpu-devices-config default: default sandboxDevicePlugin: enabled: false 5 vfioManager: enabled: false 6 1 Set this value to false . Not required for VMs. 2 Set this value to true . Required for using vGPUs with VMs. 3 Substitute <vgpu_container_registry> with your registry value. 4 Set this value to false to allow OpenShift Virtualization to configure mediated devices instead of the NVIDIA GPU Operator. 5 Set this value to false to prevent discovery and advertising of the vGPU devices to the kubelet. 6 Set this value to false to prevent loading the vfio-pci driver. Instead, follow the OpenShift Virtualization documentation to configure PCI passthrough. Additional resources Configuring PCI passthrough 7.13.11.4. How vGPUs are assigned to nodes For each physical device, OpenShift Virtualization configures the following values: A single mdev type. 
The maximum number of instances of the selected mdev type. The cluster architecture affects how devices are created and assigned to nodes. Large cluster with multiple cards per node On nodes with multiple cards that can support similar vGPU types, the relevant device types are created in a round-robin manner. For example: # ... mediatedDevicesConfiguration: mediatedDeviceTypes: - nvidia-222 - nvidia-228 - nvidia-105 - nvidia-108 # ... In this scenario, each node has two cards, both of which support the following vGPU types: nvidia-105 # ... nvidia-108 nvidia-217 nvidia-299 # ... On each node, OpenShift Virtualization creates the following vGPUs: 16 vGPUs of type nvidia-105 on the first card. 2 vGPUs of type nvidia-108 on the second card. One node has a single card that supports more than one requested vGPU type OpenShift Virtualization uses the supported type that comes first on the mediatedDeviceTypes list. For example, the card on a node supports nvidia-223 and nvidia-224 . The following mediatedDeviceTypes list is configured: # ... mediatedDevicesConfiguration: mediatedDeviceTypes: - nvidia-22 - nvidia-223 - nvidia-224 # ... In this example, OpenShift Virtualization uses the nvidia-223 type. 7.13.11.5. Managing mediated devices Before you can assign mediated devices to virtual machines, you must create the devices and expose them to the cluster. You can also reconfigure and remove mediated devices. 7.13.11.5.1. Creating and exposing mediated devices As an administrator, you can create mediated devices and expose them to the cluster by editing the HyperConverged custom resource (CR). Prerequisites You enabled the Input-Output Memory Management Unit (IOMMU) driver. If your hardware vendor provides drivers, you installed them on the nodes where you want to create mediated devices. If you use NVIDIA cards, you installed the NVIDIA GRID driver . Procedure Open the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Example 7.2. Example configuration file with mediated devices configured apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: mediatedDevicesConfiguration: mediatedDeviceTypes: - nvidia-231 nodeMediatedDeviceTypes: - mediatedDeviceTypes: - nvidia-233 nodeSelector: kubernetes.io/hostname: node-11.redhat.com permittedHostDevices: mediatedDevices: - mdevNameSelector: GRID T4-2Q resourceName: nvidia.com/GRID_T4-2Q - mdevNameSelector: GRID T4-8Q resourceName: nvidia.com/GRID_T4-8Q # ... Create mediated devices by adding them to the spec.mediatedDevicesConfiguration stanza: Example YAML snippet # ... spec: mediatedDevicesConfiguration: mediatedDeviceTypes: 1 - <device_type> nodeMediatedDeviceTypes: 2 - mediatedDeviceTypes: 3 - <device_type> nodeSelector: 4 <node_selector_key>: <node_selector_value> # ... 1 Required: Configures global settings for the cluster. 2 Optional: Overrides the global configuration for a specific node or group of nodes. Must be used with the global mediatedDeviceTypes configuration. 3 Required if you use nodeMediatedDeviceTypes . Overrides the global mediatedDeviceTypes configuration for the specified nodes. 4 Required if you use nodeMediatedDeviceTypes . Must include a key:value pair. Important Before OpenShift Virtualization 4.14, the mediatedDeviceTypes field was named mediatedDevicesTypes . Ensure that you use the correct field name when configuring mediated devices.
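Before identifying selector values, it can help to confirm which mediated device types a card on the node actually supports. The following sketch reads the standard mdev sysfs attributes on the host through a debug pod; the PCI address 0000:65:00.0 and the nvidia-231 type are placeholders for your environment, and <node_name> is the node that carries the card.
Example commands (placeholder PCI address and type)
$ oc debug node/<node_name> -- chroot /host \
  ls /sys/bus/pci/devices/0000:65:00.0/mdev_supported_types
$ oc debug node/<node_name> -- chroot /host \
  cat /sys/bus/pci/devices/0000:65:00.0/mdev_supported_types/nvidia-231/name
The name attribute contains the selector string that you use as the mdevNameSelector value, and the available_instances attribute in the same directory reports how many more instances of that type the card can create. The next step describes how these values map to the HyperConverged CR.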
Identify the name selector and resource name values for the devices that you want to expose to the cluster. You will add these values to the HyperConverged CR in the next step. Find the resourceName value by running the following command: USD oc get USDNODE -o json \ | jq '.status.allocatable \ | with_entries(select(.key | startswith("nvidia.com/"))) \ | with_entries(select(.value != "0"))' Find the mdevNameSelector value by viewing the contents of /sys/bus/pci/devices/<domain>:<bus>:<slot>.<function>/mdev_supported_types/<type>/name , substituting the correct values for your system. For example, the name file for the nvidia-231 type contains the selector string GRID T4-2Q . Using GRID T4-2Q as the mdevNameSelector value allows nodes to use the nvidia-231 type. Expose the mediated devices to the cluster by adding the mdevNameSelector and resourceName values to the spec.permittedHostDevices.mediatedDevices stanza of the HyperConverged CR: Example YAML snippet # ... permittedHostDevices: mediatedDevices: - mdevNameSelector: GRID T4-2Q 1 resourceName: nvidia.com/GRID_T4-2Q 2 # ... 1 Exposes the mediated devices that map to this value on the host. 2 Matches the resource name that is allocated on the node. Save your changes and exit the editor. Verification Optional: Confirm that a device was added to a specific node by running the following command: USD oc describe node <node_name> 7.13.11.5.2. About changing and removing mediated devices You can reconfigure or remove mediated devices in several ways: Edit the HyperConverged CR and change the contents of the mediatedDeviceTypes stanza. Change the node labels that match the nodeMediatedDeviceTypes node selector. Remove the device information from the spec.mediatedDevicesConfiguration and spec.permittedHostDevices stanzas of the HyperConverged CR. Note If you remove the device information from the spec.permittedHostDevices stanza without also removing it from the spec.mediatedDevicesConfiguration stanza, you cannot create a new mediated device type on the same node. To properly remove mediated devices, remove the device information from both stanzas. 7.13.11.5.3. Removing mediated devices from the cluster To remove a mediated device from the cluster, delete the information for that device from the HyperConverged custom resource (CR). Procedure Edit the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Remove the device information from the spec.mediatedDevicesConfiguration and spec.permittedHostDevices stanzas of the HyperConverged CR. Removing both entries ensures that you can later create a new mediated device type on the same node. For example: Example configuration file apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: mediatedDevicesConfiguration: mediatedDeviceTypes: 1 - nvidia-231 permittedHostDevices: mediatedDevices: 2 - mdevNameSelector: GRID T4-2Q resourceName: nvidia.com/GRID_T4-2Q 1 To remove the nvidia-231 device type, delete it from the mediatedDeviceTypes array. 2 To remove the GRID T4-2Q device, delete the mdevNameSelector field and its corresponding resourceName field. Save your changes and exit the editor. 7.13.11.6. Using mediated devices You can assign mediated devices to one or more virtual machines. 7.13.11.6.1. Assigning a vGPU to a VM by using the CLI Assign mediated devices such as virtual GPUs (vGPUs) to virtual machines (VMs).
Prerequisites The mediated device is configured in the HyperConverged custom resource. The VM is stopped. Procedure Assign the mediated device to a virtual machine (VM) by editing the spec.domain.devices.gpus stanza of the VirtualMachine manifest: Example virtual machine manifest apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: gpus: - deviceName: nvidia.com/TU104GL_Tesla_T4 1 name: gpu1 2 - deviceName: nvidia.com/GRID_T4-2Q name: gpu2 1 The resource name associated with the mediated device. 2 A name to identify the device on the VM. Verification To verify that the device is available from the virtual machine, run the following command, substituting <device_name> with the deviceName value from the VirtualMachine manifest: USD lspci -nnk | grep <device_name> 7.13.11.6.2. Assigning a vGPU to a VM by using the web console You can assign virtual GPUs to virtual machines by using the OpenShift Container Platform web console. Note You can add hardware devices to virtual machines created from customized templates or a YAML file. You cannot add devices to pre-supplied boot source templates for specific operating systems. Prerequisites The vGPU is configured as a mediated device in your cluster. To view the devices that are connected to your cluster, click Compute Hardware Devices from the side menu. The VM is stopped. Procedure In the OpenShift Container Platform web console, click Virtualization VirtualMachines from the side menu. Select the VM that you want to assign the device to. On the Details tab, click GPU devices . Click Add GPU device . Enter an identifying value in the Name field. From the Device name list, select the device that you want to add to the VM. Click Save . Verification To confirm that the devices were added to the VM, click the YAML tab and review the VirtualMachine configuration. Mediated devices are added to the spec.domain.devices stanza. 7.13.11.7. Additional resources Enabling Intel VT-X and AMD-V Virtualization Hardware Extensions in BIOS 7.13.12. Enabling descheduler evictions on virtual machines You can use the descheduler to evict pods so that the pods can be rescheduled onto more appropriate nodes. If the pod is a virtual machine, the pod eviction causes the virtual machine to be live migrated to another node. Important Descheduler eviction for virtual machines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 7.13.12.1. Descheduler profiles Use the Technology Preview DevPreviewLongLifecycle profile to enable the descheduler on a virtual machine. This is the only descheduler profile currently available for OpenShift Virtualization. To ensure proper scheduling, create VMs with CPU and memory requests for the expected load. DevPreviewLongLifecycle This profile balances resource usage between nodes and enables the following strategies: RemovePodsHavingTooManyRestarts : removes pods whose containers have been restarted too many times and pods where the sum of restarts over all containers (including Init Containers) is more than 100. 
Restarting the VM guest operating system does not increase this count. LowNodeUtilization : evicts pods from overutilized nodes when there are any underutilized nodes. The destination node for the evicted pod will be determined by the scheduler. A node is considered underutilized if its usage is below 20% for all thresholds (CPU, memory, and number of pods). A node is considered overutilized if its usage is above 50% for any of the thresholds (CPU, memory, and number of pods). 7.13.12.2. Installing the descheduler The descheduler is not available by default. To enable the descheduler, you must install the Kube Descheduler Operator from OperatorHub and enable one or more descheduler profiles. By default, the descheduler runs in predictive mode, which means that it only simulates pod evictions. You must change the mode to automatic for the descheduler to perform the pod evictions. Important If you have enabled hosted control planes in your cluster, set a custom priority threshold to lower the chance that pods in the hosted control plane namespaces are evicted. Set the priority threshold class name to hypershift-control-plane , because it has the lowest priority value ( 100000000 ) of the hosted control plane priority classes. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. Access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Create the required namespace for the Kube Descheduler Operator. Navigate to Administration Namespaces and click Create Namespace . Enter openshift-kube-descheduler-operator in the Name field, enter openshift.io/cluster-monitoring=true in the Labels field to enable descheduler metrics, and click Create . Install the Kube Descheduler Operator. Navigate to Operators OperatorHub . Type Kube Descheduler Operator into the filter box. Select the Kube Descheduler Operator and click Install . On the Install Operator page, select A specific namespace on the cluster . Select openshift-kube-descheduler-operator from the drop-down menu. Adjust the values for the Update Channel and Approval Strategy to the desired values. Click Install . Create a descheduler instance. From the Operators Installed Operators page, click the Kube Descheduler Operator . Select the Kube Descheduler tab and click Create KubeDescheduler . Edit the settings as necessary. To evict pods instead of simulating the evictions, change the Mode field to Automatic . Expand the Profiles section and select DevPreviewLongLifecycle . The AffinityAndTaints profile is enabled by default. Important The only profile currently available for OpenShift Virtualization is DevPreviewLongLifecycle . You can also configure the profiles and settings for the descheduler later using the OpenShift CLI ( oc ). 7.13.12.3. Enabling descheduler evictions on a virtual machine (VM) After the descheduler is installed, you can enable descheduler evictions on your VM by adding an annotation to the VirtualMachine custom resource (CR). Prerequisites Install the descheduler in the OpenShift Container Platform web console or OpenShift CLI ( oc ). Ensure that the VM is not running. 
Procedure Before starting the VM, add the descheduler.alpha.kubernetes.io/evict annotation to the VirtualMachine CR: apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: metadata: annotations: descheduler.alpha.kubernetes.io/evict: "true" If you did not already set the DevPreviewLongLifecycle profile in the web console during installation, specify the DevPreviewLongLifecycle in the spec.profile section of the KubeDescheduler object: apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 profiles: - DevPreviewLongLifecycle mode: Predictive 1 1 By default, the descheduler does not evict pods. To evict pods, set mode to Automatic . The descheduler is now enabled on the VM. 7.13.12.4. Additional resources Evicting pods using the descheduler 7.13.13. About high availability for virtual machines You can enable high availability for virtual machines (VMs) by manually deleting a failed node to trigger VM failover or by configuring remediating nodes. Manually deleting a failed node If a node fails and machine health checks are not deployed on your cluster, virtual machines with runStrategy: Always configured are not automatically relocated to healthy nodes. To trigger VM failover, you must manually delete the Node object. See Deleting a failed node to trigger virtual machine failover . Configuring remediating nodes You can configure remediating nodes by installing the Self Node Remediation Operator or the Fence Agents Remediation Operator from the OperatorHub and enabling machine health checks or node remediation checks. For more information on remediation, fencing, and maintaining nodes, see the Workload Availability for Red Hat OpenShift documentation. 7.13.14. Virtual machine control plane tuning OpenShift Virtualization offers the following tuning options at the control-plane level: The highBurst profile, which uses fixed QPS and burst rates, to create hundreds of virtual machines (VMs) in one batch Migration setting adjustment based on workload type 7.13.14.1. Configuring a highBurst profile Use the highBurst profile to create and maintain a large number of virtual machines (VMs) in one cluster. Procedure Apply the following patch to enable the highBurst tuning policy profile: USD oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \ --type=json -p='[{"op": "add", "path": "/spec/tuningPolicy", \ "value": "highBurst"}]' Verification Run the following command to verify that the highBurst tuning policy profile is enabled: USD oc get kubevirt.kubevirt.io/kubevirt-kubevirt-hyperconverged \ -n openshift-cnv -o go-template --template='{{range USDconfig, \ USDvalue := .spec.configuration}} {{if eq USDconfig "apiConfiguration" \ "webhookConfiguration" "controllerConfiguration" "handlerConfiguration"}} \ {{"\n"}} {{USDconfig}} = {{USDvalue}} {{end}} {{end}} {{"\n"}}' 7.13.15. Assigning compute resources In OpenShift Virtualization, compute resources assigned to virtual machines (VMs) are backed by either guaranteed CPUs or time-sliced CPU shares. Guaranteed CPUs, also known as CPU reservation, dedicate CPU cores or threads to a specific workload, which makes them unavailable to any other workload. Assigning guaranteed CPUs to a VM ensures that the VM will have sole access to a reserved physical CPU. Enable dedicated resources for VMs to use a guaranteed CPU. Time-sliced CPUs dedicate a slice of time on a shared physical CPU to each workload.
You can specify the size of the slice during VM creation, or when the VM is offline. By default, each vCPU receives 100 milliseconds, or 1/10 of a second, of physical CPU time. The type of CPU reservation depends on the instance type or VM configuration. 7.13.15.1. Overcommitting CPU resources Time-slicing allows multiple virtual CPUs (vCPUs) to share a single physical CPU. This is known as CPU overcommitment . Guaranteed VMs can not be overcommitted. Configure CPU overcommitment to prioritize VM density over performance when assigning CPUs to VMs. With a higher CPU over-commitment of vCPUs, more VMs fit onto a given node. 7.13.15.2. Setting the CPU allocation ratio The CPU Allocation Ratio specifies the degree of overcommitment by mapping vCPUs to time slices of physical CPUs. For example, a mapping or ratio of 10:1 maps 10 virtual CPUs to 1 physical CPU by using time slices. To change the default number of vCPUs mapped to each physical CPU, set the vmiCPUAllocationRatio value in the HyperConverged CR. The pod CPU request is calculated by multiplying the number of vCPUs by the reciprocal of the CPU allocation ratio. For example, if vmiCPUAllocationRatio is set to 10, OpenShift Virtualization will request 10 times fewer CPUs on the pod for that VM. Procedure Set the vmiCPUAllocationRatio value in the HyperConverged CR to define a node CPU allocation ratio. Open the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Set the vmiCPUAllocationRatio : ... spec: resourceRequirements: vmiCPUAllocationRatio: 1 1 # ... 1 When vmiCPUAllocationRatio is set to 1 , the maximum amount of vCPUs are requested for the pod. 7.13.15.3. Additional resources Pod Quality of Service Classes 7.14. VM disks 7.14.1. Hot-plugging VM disks You can add or remove virtual disks without stopping your virtual machine (VM) or virtual machine instance (VMI). Only data volumes and persistent volume claims (PVCs) can be hot plugged and hot-unplugged. You cannot hot plug or hot-unplug container disks. A hot plugged disk remains attached to the VM even after reboot. You must detach the disk to remove it from the VM. You can make a hot plugged disk persistent so that it is permanently mounted on the VM. Note Each VM has a virtio-scsi controller so that hot plugged disks can use the scsi bus. The virtio-scsi controller overcomes the limitations of virtio while retaining its performance advantages. It is highly scalable and supports hot plugging over 4 million disks. Regular virtio is not available for hot plugged disks because it is not scalable. Each virtio disk uses one of the limited PCI Express (PCIe) slots in the VM. PCIe slots are also used by other devices and must be reserved in advance. Therefore, slots might not be available on demand. 7.14.1.1. Hot plugging and hot unplugging a disk by using the web console You can hot plug a disk by attaching it to a virtual machine (VM) while the VM is running by using the OpenShift Container Platform web console. The hot plugged disk remains attached to the VM until you unplug it. You can make a hot plugged disk persistent so that it is permanently mounted on the VM. Prerequisites You must have a data volume or persistent volume claim (PVC) available for hot plugging. Procedure Navigate to Virtualization VirtualMachines in the web console. Select a running VM to view its details. On the VirtualMachine details page, click Configuration Disks . Add a hot plugged disk: Click Add disk . 
In the Add disk (hot plugged) window, select the disk from the Source list and click Save . Optional: Unplug a hot plugged disk: Click the options menu beside the disk and select Detach . Click Detach . Optional: Make a hot plugged disk persistent: Click the options menu beside the disk and select Make persistent . Reboot the VM to apply the change. 7.14.1.2. Hot plugging and hot unplugging a disk by using the command line You can hot plug and hot unplug a disk while a virtual machine (VM) is running by using the command line. You can make a hot plugged disk persistent so that it is permanently mounted on the VM. Prerequisites You must have at least one data volume or persistent volume claim (PVC) available for hot plugging. Procedure Hot plug a disk by running the following command: USD virtctl addvolume <virtual-machine|virtual-machine-instance> \ --volume-name=<datavolume|PVC> \ [--persist] [--serial=<label-name>] Use the optional --persist flag to add the hot plugged disk to the virtual machine specification as a permanently mounted virtual disk. Stop, restart, or reboot the virtual machine to permanently mount the virtual disk. After specifying the --persist flag, you can no longer hot plug or hot unplug the virtual disk. The --persist flag applies to virtual machines, not virtual machine instances. The optional --serial flag allows you to add an alphanumeric string label of your choice. This helps you to identify the hot plugged disk in a guest virtual machine. If you do not specify this option, the label defaults to the name of the hot plugged data volume or PVC. Hot unplug a disk by running the following command: USD virtctl removevolume <virtual-machine|virtual-machine-instance> \ --volume-name=<datavolume|PVC> 7.14.2. Expanding virtual machine disks You can increase the size of a virtual machine (VM) disk by expanding the persistent volume claim (PVC) of the disk. If your storage provider does not support volume expansion, you can expand the available virtual storage of a VM by adding blank data volumes. You cannot reduce the size of a VM disk. 7.14.2.1. Expanding a VM disk PVC You can increase the size of a virtual machine (VM) disk by expanding the persistent volume claim (PVC) of the disk. If the PVC uses the file system volume mode, the disk image file expands to the available size while reserving some space for file system overhead. Procedure Edit the PersistentVolumeClaim manifest of the VM disk that you want to expand: USD oc edit pvc <pvc_name> Update the disk size: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: vm-disk-expand spec: accessModes: - ReadWriteMany resources: requests: storage: 3Gi 1 # ... 1 Specify the new disk size. Additional resources for volume expansion Extending a basic volume in Windows Extending an existing file system partition without destroying data in Red Hat Enterprise Linux Extending a logical volume and its file system online in Red Hat Enterprise Linux 7.14.2.2. Expanding available virtual storage by adding blank data volumes You can expand the available storage of a virtual machine (VM) by adding blank data volumes. Prerequisites You must have at least one persistent volume. 
Procedure Create a DataVolume manifest as shown in the following example: Example DataVolume manifest apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: blank-image-datavolume spec: source: blank: {} storage: resources: requests: storage: <2Gi> 1 storageClassName: "<storage_class>" 2 1 Specify the amount of available space requested for the data volume. 2 Optional: If you do not specify a storage class, the default storage class is used. Create the data volume by running the following command: USD oc create -f <blank-image-datavolume>.yaml Additional resources for data volumes Configuring preallocation mode for data volumes Managing data volume annotations
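As a follow-up, after the blank data volume reports a Succeeded phase, you can attach it to a running VM with the hot plug workflow described earlier in "Hot-plugging VM disks". The following sketch assumes a running VM named example-vm; the --serial label is optional and purely illustrative.
Example commands (hypothetical VM name)
$ oc get dv blank-image-datavolume
$ virtctl addvolume example-vm \
  --volume-name=blank-image-datavolume \
  --serial=blank-disk-01
Add the --persist flag only if the disk should remain permanently mounted after the next restart; without it, the disk stays hot-pluggable and can be removed later with virtctl removevolume.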
[ "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: app: <vm_name> 1 name: <vm_name> spec: dataVolumeTemplates: - apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <vm_name> spec: sourceRef: kind: DataSource name: rhel9 2 namespace: openshift-virtualization-os-images storage: resources: requests: storage: 30Gi running: false template: metadata: labels: kubevirt.io/domain: <vm_name> spec: domain: cpu: cores: 1 sockets: 2 threads: 1 devices: disks: - disk: bus: virtio name: rootdisk - disk: bus: virtio name: cloudinitdisk interfaces: - masquerade: {} name: default rng: {} features: smm: enabled: true firmware: bootloader: efi: {} resources: requests: memory: 8Gi evictionStrategy: LiveMigrate networks: - name: default pod: {} volumes: - dataVolume: name: <vm_name> name: rootdisk - cloudInitNoCloud: userData: |- #cloud-config user: cloud-user password: '<password>' 3 chpasswd: { expire: False } name: cloudinitdisk", "oc create -f <vm_manifest_file>.yaml", "virtctl start <vm_name> -n <namespace>", "cat > Dockerfile << EOF FROM registry.access.redhat.com/ubi8/ubi:latest AS builder ADD --chown=107:107 <vm_image>.qcow2 /disk/ 1 RUN chmod 0440 /disk/* FROM scratch COPY --from=builder /disk/* /disk/ EOF", "podman build -t <registry>/<container_disk_name>:latest .", "podman push <registry>/<container_disk_name>:latest", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: storageImport: insecureRegistries: 1 - \"private-registry-example-1:5000\" - \"private-registry-example-2:5000\"", "apiVersion: v1 kind: Secret metadata: name: data-source-secret labels: app: containerized-data-importer type: Opaque data: accessKeyId: \"\" 1 secretKey: \"\" 2", "oc apply -f data-source-secret.yaml", "oc create configmap tls-certs 1 --from-file=</path/to/file/ca.pem> 2", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume name: vm-fedora-datavolume 1 spec: dataVolumeTemplates: - metadata: creationTimestamp: null name: fedora-dv 2 spec: storage: resources: requests: storage: 10Gi 3 storageClassName: <storage_class> 4 source: registry: url: \"docker://kubevirt/fedora-cloud-container-disk-demo:latest\" 5 secretRef: data-source-secret 6 certConfigMap: tls-certs 7 status: {} running: true template: metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume spec: domain: devices: disks: - disk: bus: virtio name: datavolumedisk1 machine: type: \"\" resources: requests: memory: 1.5Gi terminationGracePeriodSeconds: 180 volumes: - dataVolume: name: fedora-dv name: datavolumedisk1 status: {}", "oc create -f vm-fedora-datavolume.yaml", "oc get pods", "oc describe dv fedora-dv 1", "virtctl console vm-fedora-datavolume", "apiVersion: v1 kind: Secret metadata: name: data-source-secret labels: app: containerized-data-importer type: Opaque data: accessKeyId: \"\" 1 secretKey: \"\" 2", "oc apply -f data-source-secret.yaml", "oc create configmap tls-certs 1 --from-file=</path/to/file/ca.pem> 2", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume name: vm-fedora-datavolume 1 spec: dataVolumeTemplates: - metadata: creationTimestamp: null name: fedora-dv 2 spec: storage: resources: requests: storage: 10Gi 3 storageClassName: <storage_class> 4 source: http: url: 
\"https://mirror.arizona.edu/fedora/linux/releases/35/Cloud/x86_64/images/Fedora-Cloud-Base-35-1.2.x86_64.qcow2\" 5 registry: url: \"docker://kubevirt/fedora-cloud-container-disk-demo:latest\" 6 secretRef: data-source-secret 7 certConfigMap: tls-certs 8 status: {} running: true template: metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume spec: domain: devices: disks: - disk: bus: virtio name: datavolumedisk1 machine: type: \"\" resources: requests: memory: 1.5Gi terminationGracePeriodSeconds: 180 volumes: - dataVolume: name: fedora-dv name: datavolumedisk1 status: {}", "oc create -f vm-fedora-datavolume.yaml", "oc get pods", "oc describe dv fedora-dv 1", "virtctl console vm-fedora-datavolume", "%WINDIR%\\System32\\Sysprep\\sysprep.exe /generalize /shutdown /oobe /mode:vm", "virtctl image-upload dv <datavolume_name> \\ 1 --size=<datavolume_size> \\ 2 --image-path=</path/to/image> \\ 3", "oc get dvs", "kind: VolumeSnapshotClass apiVersion: snapshot.storage.k8s.io/v1 driver: openshift-storage.rbd.csi.ceph.com", "kind: StorageClass apiVersion: storage.k8s.io/v1 provisioner: openshift-storage.rbd.csi.ceph.com", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <datavolume> 1 spec: source: pvc: namespace: \"<source_namespace>\" 2 name: \"<my_vm_disk>\" 3 storage: {}", "oc create -f <datavolume>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-dv-clone name: vm-dv-clone 1 spec: running: false template: metadata: labels: kubevirt.io/vm: vm-dv-clone spec: domain: devices: disks: - disk: bus: virtio name: root-disk resources: requests: memory: 64M volumes: - dataVolume: name: favorite-clone name: root-disk dataVolumeTemplates: - metadata: name: favorite-clone spec: storage: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi source: pvc: namespace: <source_namespace> 2 name: \"<source_pvc>\" 3", "oc create -f <vm-clone-datavolumetemplate>.yaml", "yum install -y qemu-guest-agent", "systemctl enable --now qemu-guest-agent", "oc get vm <vm_name>", "net start", "spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 1 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk", "virtctl start <vm> -n <namespace>", "oc apply -f <vm.yaml>", "virtctl vnc <vm_name>", "virtctl vnc <vm_name> -v 4", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/featureGates/deployVmConsoleProxy\", \"value\": true}]'", "curl --header \"Authorization: Bearer USD{TOKEN}\" \"https://api.<cluster_fqdn>/apis/token.kubevirt.io/v1alpha1/namespaces/<namespace>/virtualmachines/<vm_name>/vnc?duration=<duration>\" 1", "{ \"token\": \"eyJhb...\" }", "export VNC_TOKEN=\"<token>\"", "oc login --token USD{VNC_TOKEN}", "virtctl vnc <vm_name> -n <namespace>", "virtctl console <vm_name>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: dataVolumeTemplates: - apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: example-vm-disk spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: requests: storage: 30Gi running: false template: metadata: labels: kubevirt.io/domain: example-vm spec: domain: cpu: cores: 1 sockets: 2 threads: 1 devices: disks: - disk: bus: virtio name: rootdisk - disk: bus: virtio name: cloudinitdisk interfaces: - masquerade: {} name: default rng: 
{} features: smm: enabled: true firmware: bootloader: efi: {} resources: requests: memory: 8Gi evictionStrategy: LiveMigrate networks: - name: default pod: {} volumes: - dataVolume: name: example-volume name: example-vm-disk - cloudInitConfigDrive: <.> userData: |- #cloud-config user: cloud-user password: <password> chpasswd: { expire: False } name: cloudinitdisk accessCredentials: - sshPublicKey: propagationMethod: configDrive: {} source: secret: secretName: authorized-keys <.> --- apiVersion: v1 kind: Secret metadata: name: authorized-keys data: key: | MIIEpQIBAAKCAQEAulqb/Y... <.>", "oc create -f <manifest_file>.yaml", "virtctl start vm example-vm -n example-namespace", "oc describe vm example-vm -n example-namespace", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: template: spec: accessCredentials: - sshPublicKey: propagationMethod: configDrive: {} source: secret: secretName: authorized-keys", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: dataVolumeTemplates: - apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: example-vm-disk spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: requests: storage: 30Gi running: false template: metadata: labels: kubevirt.io/domain: example-vm spec: domain: cpu: cores: 1 sockets: 2 threads: 1 devices: disks: - disk: bus: virtio name: rootdisk - disk: bus: virtio name: cloudinitdisk interfaces: - masquerade: {} name: default rng: {} features: smm: enabled: true firmware: bootloader: efi: {} resources: requests: memory: 8Gi evictionStrategy: LiveMigrate networks: - name: default pod: {} volumes: - dataVolume: name: example-volume name: example-vm-disk - cloudInitConfigDrive: <.> userData: |- #cloud-config user: cloud-user password: <password> chpasswd: { expire: False } runcmd: - [ setsebool, -P, virt_qemu_ga_manage_ssh, on ] name: cloudinitdisk accessCredentials: - sshPublicKey: propagationMethod: qemuGuestAgent: users: [\"user1\",\"user2\",\"fedora\"] <.> source: secret: secretName: authorized-keys <.> --- apiVersion: v1 kind: Secret metadata: name: authorized-keys data: key: | MIIEpQIBAAKCAQEAulqb/Y... 
<.>", "oc create -f <manifest_file>.yaml", "virtctl start vm example-vm -n example-namespace", "oc describe vm example-vm -n example-namespace", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: template: spec: accessCredentials: - sshPublicKey: propagationMethod: qemuGuestAgent: users: [\"user1\",\"user2\",\"fedora\"] source: secret: secretName: authorized-keys", "virtctl -n <namespace> ssh <username>@example-vm -i <ssh_key> 1", "virtctl -n my-namespace ssh cloud-user@example-vm -i my-key", "Host vm/* ProxyCommand virtctl port-forward --stdio=true %h %p", "ssh <user>@vm/<vm_name>.<namespace>", "virtctl expose vm <vm_name> --name <service_name> --type <service_type> --port <port> 1", "virtctl expose vm example-vm --name example-service --type NodePort --port 22", "oc get service", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: running: false template: metadata: labels: special: key 1", "apiVersion: v1 kind: Service metadata: name: example-service namespace: example-namespace spec: selector: special: key 1 type: NodePort 2 ports: 3 protocol: TCP port: 80 targetPort: 9376 nodePort: 30000", "oc create -f example-service.yaml", "oc get service -n example-namespace", "ssh <user_name>@<ip_address> -p <port> 1", "oc describe vm <vm_name> -n <namespace>", "Interfaces: Interface Name: eth0 Ip Address: 10.244.0.37/24 Ip Addresses: 10.244.0.37/24 fe80::858:aff:fef4:25/64 Mac: 0a:58:0a:f4:00:25 Name: default", "ssh <user_name>@<ip_address> -i <ssh_key>", "ssh [email protected] -i ~/.ssh/id_rsa_cloud-user", "oc edit vm <vm_name>", "oc apply vm <vm_name> -n <namespace>", "oc edit vm <vm_name> -n <namespace>", "disks: - bootOrder: 1 1 disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk - cdrom: bus: virtio name: cd-drive-1 interfaces: - boot Order: 2 2 macAddress: '02:96:c4:00:00' masquerade: {} name: default", "oc delete vm <vm_name>", "apiVersion: export.kubevirt.io/v1alpha1 kind: VirtualMachineExport metadata: name: example-export spec: source: apiGroup: \"kubevirt.io\" 1 kind: VirtualMachine 2 name: example-vm ttlDuration: 1h 3", "oc create -f example-export.yaml", "oc get vmexport example-export -o yaml", "apiVersion: export.kubevirt.io/v1alpha1 kind: VirtualMachineExport metadata: name: example-export namespace: example spec: source: apiGroup: \"\" kind: PersistentVolumeClaim name: example-pvc tokenSecretRef: example-token status: conditions: - lastProbeTime: null lastTransitionTime: \"2022-06-21T14:10:09Z\" reason: podReady status: \"True\" type: Ready - lastProbeTime: null lastTransitionTime: \"2022-06-21T14:09:02Z\" reason: pvcBound status: \"True\" type: PVCReady links: external: 1 cert: |- -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- volumes: - formats: - format: raw url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img - format: gzip url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img.gz name: example-disk internal: 2 cert: |- -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- volumes: - formats: - format: raw url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img - format: gzip url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img.gz name: example-disk phase: Ready serviceName: 
virt-export-example-export", "oc get vmexport <export_name> -o jsonpath={.status.links.external.cert} > cacert.crt 1", "oc get secret export-token-<export_name> -o jsonpath={.data.token} | base64 --decode > token_decode 1", "oc get vmexport <export_name> -o yaml", "apiVersion: export.kubevirt.io/v1alpha1 kind: VirtualMachineExport metadata: name: example-export spec: source: apiGroup: \"kubevirt.io\" kind: VirtualMachine name: example-vm tokenSecretRef: example-token status: # links: external: # manifests: - type: all url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/external/manifests/all 1 - type: auth-header-secret url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret 2 internal: # manifests: - type: all url: https://virt-export-export-pvc.default.svc/internal/manifests/all 3 - type: auth-header-secret url: https://virt-export-export-pvc.default.svc/internal/manifests/secret phase: Ready serviceName: virt-export-example-export", "curl --cacert cacert.crt <secret_manifest_url> -H \\ 1 \"x-kubevirt-export-token:token_decode\" -H \\ 2 \"Accept:application/yaml\"", "curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret -H \"x-kubevirt-export-token:token_decode\" -H \"Accept:application/yaml\"", "curl --cacert cacert.crt <all_manifest_url> -H \\ 1 \"x-kubevirt-export-token:token_decode\" -H \\ 2 \"Accept:application/yaml\"", "curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/external/manifests/all -H \"x-kubevirt-export-token:token_decode\" -H \"Accept:application/yaml\"", "oc get vmis -A", "oc delete vmi <vmi_name>", "kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: vmStateStorageClass: <storage_class_name>", "oc edit vm <vm_name> -n <namespace>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: tpm: 1 persistent: true 2", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: kubevirt-hyperconverged spec: tektonPipelinesNamespace: <user_namespace> 1 featureGates: deployTektonTaskResources: true 2", "apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: generateName: windows10-installer-run- labels: pipelinerun: windows10-installer-run spec: params: - name: winImageDownloadURL value: <link_to_windows_10_iso> 1 pipelineRef: name: windows10-installer taskRunSpecs: - pipelineTaskName: copy-template taskServiceAccountName: copy-template-task - pipelineTaskName: modify-vm-template taskServiceAccountName: modify-vm-template-task - pipelineTaskName: create-vm-from-template taskServiceAccountName: create-vm-from-template-task - pipelineTaskName: wait-for-vmi-status taskServiceAccountName: wait-for-vmi-status-task - pipelineTaskName: create-base-dv taskServiceAccountName: modify-data-object-task - pipelineTaskName: cleanup-vm taskServiceAccountName: cleanup-vm-task status: {}", "oc apply -f windows10-installer-run.yaml", "apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: generateName: windows10-customize-run- labels: pipelinerun: windows10-customize-run spec: params: - name: allowReplaceGoldenTemplate value: 
true - name: allowReplaceCustomizationTemplate value: true pipelineRef: name: windows10-customize taskRunSpecs: - pipelineTaskName: copy-template-customize taskServiceAccountName: copy-template-task - pipelineTaskName: modify-vm-template-customize taskServiceAccountName: modify-vm-template-task - pipelineTaskName: create-vm-from-template taskServiceAccountName: create-vm-from-template-task - pipelineTaskName: wait-for-vmi-status taskServiceAccountName: wait-for-vmi-status-task - pipelineTaskName: create-base-dv taskServiceAccountName: modify-data-object-task - pipelineTaskName: cleanup-vm taskServiceAccountName: cleanup-vm-task - pipelineTaskName: copy-template-golden taskServiceAccountName: copy-template-task - pipelineTaskName: modify-vm-template-golden taskServiceAccountName: modify-vm-template-task status: {}", "oc apply -f windows10-customize-run.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: with-limits spec: running: false template: spec: domain: resources: requests: memory: 128Mi limits: memory: 256Mi 1", "metadata: name: example-vm-node-selector apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: nodeSelector: example-key-1: example-value-1 example-key-2: example-value-2", "metadata: name: example-vm-pod-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchExpressions: - key: example-key-1 operator: In values: - example-value-1 topologyKey: kubernetes.io/hostname podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: example-key-2 operator: In values: - example-value-2 topologyKey: kubernetes.io/hostname", "metadata: name: example-vm-node-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 nodeSelectorTerms: - matchExpressions: - key: example.io/example-key operator: In values: - example-value-1 - example-value-2 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 preference: matchExpressions: - key: example-node-label-key operator: In values: - example-node-label-value", "metadata: name: example-vm-tolerations apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: tolerations: - key: \"key\" operator: \"Equal\" value: \"virtualization\" effect: \"NoSchedule\"", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: certConfig: ca: duration: 48h0m0s renewBefore: 24h0m0s 1 server: duration: 24h0m0s 2 renewBefore: 12h0m0s 3", "certConfig: ca: duration: 4h0m0s renewBefore: 1h0m0s server: duration: 4h0m0s renewBefore: 4h0m0s", "error: hyperconvergeds.hco.kubevirt.io \"kubevirt-hyperconverged\" could not be patched: admission webhook \"validate-hco.kubevirt.io\" denied the request: spec.certConfig: ca.duration is smaller than server.duration", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: defaultCPUModel: \"EPYC\"", "apiversion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: special: vm-secureboot name: vm-secureboot spec: template: metadata: labels: special: vm-secureboot spec: domain: devices: disks: - disk: bus: virtio name: containerdisk 
features: acpi: {} smm: enabled: true 1 firmware: bootloader: efi: secureBoot: true 2", "oc create -f <file_name>.yaml", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: pxe-net-conf 1 spec: config: | { \"cniVersion\": \"0.3.1\", \"name\": \"pxe-net-conf\", 2 \"type\": \"bridge\", 3 \"bridge\": \"bridge-interface\", 4 \"macspoofchk\": false, 5 \"vlan\": 100, 6 \"preserveDefaultVlan\": false 7 }", "oc create -f pxe-net-conf.yaml", "interfaces: - masquerade: {} name: default - bridge: {} name: pxe-net macAddress: de:00:00:00:00:de bootOrder: 1", "devices: disks: - disk: bus: virtio name: containerdisk bootOrder: 2", "networks: - name: default pod: {} - name: pxe-net multus: networkName: pxe-net-conf", "oc create -f vmi-pxe-boot.yaml", "virtualmachineinstance.kubevirt.io \"vmi-pxe-boot\" created", "oc get vmi vmi-pxe-boot -o yaml | grep -i phase phase: Running", "virtctl vnc vmi-pxe-boot", "virtctl console vmi-pxe-boot", "ip addr", "3. eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff", "kind: VirtualMachine spec: domain: resources: requests: memory: \"4Gi\" 1 memory: hugepages: pageSize: \"1Gi\" 2", "oc apply -f <virtual_machine>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: features: - name: apic 1 policy: require 2", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: Conroe 1", "apiVersion: kubevirt/v1alpha3 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: host-model 1", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora spec: running: true template: spec: schedulerName: my-scheduler 1 domain: devices: disks: - name: containerdisk disk: bus: virtio", "oc get pods", "NAME READY STATUS RESTARTS AGE virt-launcher-vm-fedora-dpc87 2/2 Running 0 24m", "oc describe pod virt-launcher-vm-fedora-dpc87", "[...] 
Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 21m my-scheduler Successfully assigned default/virt-launcher-vm-fedora-dpc87 to node01 [...]", "oc label node <node_name> nvidia.com/gpu.deploy.operands=false 1", "oc describe node <node_name>", "oc get pods -n nvidia-gpu-operator", "NAME READY STATUS RESTARTS AGE gpu-operator-59469b8c5c-hw9wj 1/1 Running 0 8d nvidia-sandbox-validator-7hx98 1/1 Running 0 8d nvidia-sandbox-validator-hdb7p 1/1 Running 0 8d nvidia-sandbox-validator-kxwj7 1/1 Terminating 0 9d nvidia-vfio-manager-7w9fs 1/1 Running 0 8d nvidia-vfio-manager-866pz 1/1 Running 0 8d nvidia-vfio-manager-zqtck 1/1 Terminating 0 9d", "oc get pods -n nvidia-gpu-operator", "NAME READY STATUS RESTARTS AGE gpu-operator-59469b8c5c-hw9wj 1/1 Running 0 8d nvidia-sandbox-validator-7hx98 1/1 Running 0 8d nvidia-sandbox-validator-hdb7p 1/1 Running 0 8d nvidia-vfio-manager-7w9fs 1/1 Running 0 8d nvidia-vfio-manager-866pz 1/1 Running 0 8d", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 100-worker-iommu 2 spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on 3", "oc create -f 100-worker-kernel-arg-iommu.yaml", "oc get MachineConfig", "lspci -nnv | grep -i nvidia", "02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)", "variant: openshift version: 4.14.0 metadata: name: 100-worker-vfiopci labels: machineconfiguration.openshift.io/role: worker 1 storage: files: - path: /etc/modprobe.d/vfio.conf mode: 0644 overwrite: true contents: inline: | options vfio-pci ids=10de:1eb8 2 - path: /etc/modules-load.d/vfio-pci.conf 3 mode: 0644 overwrite: true contents: inline: vfio-pci", "butane 100-worker-vfiopci.bu -o 100-worker-vfiopci.yaml", "oc apply -f 100-worker-vfiopci.yaml", "oc get MachineConfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 00-worker d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 100-worker-iommu 3.2.0 30s 100-worker-vfiopci-configuration 3.2.0 30s", "lspci -nnk -d 10de:", "04:00.0 3D controller [0302]: NVIDIA Corporation GP102GL [Tesla P40] [10de:1eb8] (rev a1) Subsystem: NVIDIA Corporation Device [10de:1eb8] Kernel driver in use: vfio-pci Kernel modules: nouveau", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: 1 pciHostDevices: 2 - pciDeviceSelector: \"10DE:1DB6\" 3 resourceName: \"nvidia.com/GV100GL_Tesla_V100\" 4 - pciDeviceSelector: \"10DE:1EB8\" resourceName: \"nvidia.com/TU104GL_Tesla_T4\" - pciDeviceSelector: \"8086:6F54\" resourceName: \"intel.com/qat\" externalResourceProvider: true 5", "oc describe node <node_name>", "Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 
devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: pciHostDevices: - pciDeviceSelector: \"10DE:1DB6\" resourceName: \"nvidia.com/GV100GL_Tesla_V100\" - pciDeviceSelector: \"10DE:1EB8\" resourceName: \"nvidia.com/TU104GL_Tesla_T4\"", "oc describe node <node_name>", "Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250", "apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: hostDevices: - deviceName: nvidia.com/TU104GL_Tesla_T4 1 name: hostdevices1", "lspci -nnk | grep NVIDIA", "02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 100-worker-iommu 2 spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on 3", "oc create -f 100-worker-kernel-arg-iommu.yaml", "oc get MachineConfig", "kind: ClusterPolicy apiVersion: nvidia.com/v1 metadata: name: gpu-cluster-policy spec: operator: defaultRuntime: crio use_ocp_driver_toolkit: true initContainer: {} sandboxWorkloads: enabled: true defaultWorkload: vm-vgpu driver: enabled: false 1 dcgmExporter: {} dcgm: enabled: true daemonsets: {} devicePlugin: {} gfd: {} migManager: enabled: true nodeStatusExporter: enabled: true mig: strategy: single toolkit: enabled: true validator: plugin: env: - name: WITH_WORKLOAD value: \"true\" vgpuManager: enabled: true 2 repository: <vgpu_container_registry> 3 image: <vgpu_image_name> version: nvidia-vgpu-manager vgpuDeviceManager: enabled: false 4 config: name: vgpu-devices-config default: default sandboxDevicePlugin: enabled: false 5 vfioManager: enabled: false 6", "mediatedDevicesConfiguration: mediatedDeviceTypes: - nvidia-222 - nvidia-228 - nvidia-105 - nvidia-108", "nvidia-105 nvidia-108 nvidia-217 nvidia-299", "mediatedDevicesConfiguration: mediatedDeviceTypes: - nvidia-22 - nvidia-223 - nvidia-224", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: mediatedDevicesConfiguration: mediatedDeviceTypes: - nvidia-231 nodeMediatedDeviceTypes: - mediatedDeviceTypes: - nvidia-233 nodeSelector: kubernetes.io/hostname: node-11.redhat.com permittedHostDevices: mediatedDevices: - mdevNameSelector: GRID T4-2Q resourceName: nvidia.com/GRID_T4-2Q - mdevNameSelector: GRID T4-8Q resourceName: nvidia.com/GRID_T4-8Q", "spec: mediatedDevicesConfiguration: mediatedDeviceTypes: 1 - <device_type> nodeMediatedDeviceTypes: 2 - mediatedDeviceTypes: 3 - <device_type> 
nodeSelector: 4 <node_selector_key>: <node_selector_value>", "oc get USDNODE -o json | jq '.status.allocatable | with_entries(select(.key | startswith(\"nvidia.com/\"))) | with_entries(select(.value != \"0\"))'", "permittedHostDevices: mediatedDevices: - mdevNameSelector: GRID T4-2Q 1 resourceName: nvidia.com/GRID_T4-2Q 2", "oc describe node <node_name>", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: mediatedDevicesConfiguration: mediatedDeviceTypes: 1 - nvidia-231 permittedHostDevices: mediatedDevices: 2 - mdevNameSelector: GRID T4-2Q resourceName: nvidia.com/GRID_T4-2Q", "apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: gpus: - deviceName: nvidia.com/TU104GL_Tesla_T4 1 name: gpu1 2 - deviceName: nvidia.com/GRID_T4-2Q name: gpu2", "lspci -nnk | grep <device_name>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: metadata: annotations: descheduler.alpha.kubernetes.io/evict: \"true\"", "apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 profiles: - DevPreviewLongLifecycle mode: Predictive 1", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type=json -p='[{\"op\": \"add\", \"path\": \"/spec/tuningPolicy\", \"value\": \"highBurst\"}]'", "oc get kubevirt.kubevirt.io/kubevirt-kubevirt-hyperconverged -n openshift-cnv -o go-template --template='{{range USDconfig, USDvalue := .spec.configuration}} {{if eq USDconfig \"apiConfiguration\" \"webhookConfiguration\" \"controllerConfiguration\" \"handlerConfiguration\"}} {{\"\\n\"}} {{USDconfig}} = {{USDvalue}} {{end}} {{end}} {{\"\\n\"}}", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "spec: resourceRequirements: vmiCPUAllocationRatio: 1 1", "virtctl addvolume <virtual-machine|virtual-machine-instance> --volume-name=<datavolume|PVC> [--persist] [--serial=<label-name>]", "virtctl removevolume <virtual-machine|virtual-machine-instance> --volume-name=<datavolume|PVC>", "oc edit pvc <pvc_name>", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: vm-disk-expand spec: accessModes: - ReadWriteMany resources: requests: storage: 3Gi 1", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: blank-image-datavolume spec: source: blank: {} storage: resources: requests: storage: <2Gi> 1 storageClassName: \"<storage_class>\" 2", "oc create -f <blank-image-datavolume>.yaml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/virtualization/virtual-machines
14.4. Locking Operations
14.4. Locking Operations 14.4.1. About the LockManager The LockManager component is responsible for locking an entry before a write process initiates. The LockManager uses a LockContainer to locate, hold, and create locks. JBoss Data Grid uses two types of LockContainers internally; the choice between them depends on the useLockStriping setting. The first type supports lock striping, while the second type supports one lock per entry. See Also: Chapter 15, Set Up Lock Striping 14.4.2. About Lock Acquisition Red Hat JBoss Data Grid acquires remote locks lazily by default. The node running a transaction acquires the lock locally, while other cluster nodes attempt to lock the cache keys involved during the two-phase prepare/commit. JBoss Data Grid can lock cache keys in a pessimistic manner either explicitly or implicitly. 14.4.3. About Concurrency Levels Concurrency refers to the number of threads simultaneously interacting with the data grid. In Red Hat JBoss Data Grid, concurrency levels refer to the number of concurrent threads used within a lock container. Concurrency levels determine the size of each striped lock container and additionally tune all related JDK ConcurrentHashMap based collections, such as those internal to DataContainers.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-locking_operations
26.2. Configuration Examples
26.2. Configuration Examples 26.2.1. Mapping SELinux users to IdM users The following procedure shows how to create a new SELinux mapping and how to add a new IdM user to this mapping. Procedure 26.1. How to Add a User to an SELinux Mapping To create a new SELinux mapping, enter the following command where SELinux_mapping is the name of the new SELinux mapping and the --selinuxuser option specifies a particular SELinux user: Enter the following command to add an IdM user with the tuser user name to the SELinux mapping: To add a new host named ipaclient.example.com to the SELinux mapping, enter the following command: The tuser user gets the staff_u:s0-s0:c0.c1023 label when logged in to the ipaclient.example.com host:
[ "~]USD ipa selinuxusermap-add SELinux_mapping --selinuxuser=staff_u:s0-s0:c0.c1023", "~]USD ipa selinuxusermap-add-user --users=tuser SELinux_mapping", "~]USD ipa selinuxusermap-add-host --hosts=ipaclient.example.com SELinux_mapping", "[tuser@ipa-client]USD id -Z staff_u:staff_r:staff_t:s0-s0:c0.c1023" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-managing_confined_services-identity_management-configuration_examples
Chapter 13. log
Chapter 13. log 13.1. log:clear 13.1.1. Description Clear log entries. 13.1.2. Syntax log:clear [options] 13.1.3. Options Name Description --help Display this help message 13.2. log:display 13.2.1. Description Displays log entries. 13.2.2. Syntax log:display [options] [logger] 13.2.3. Arguments Name Description logger The name of the logger. This can be ROOT, ALL, or the name of a logger specified in the org.ops4j.pax.logger.cfg file. 13.2.4. Options Name Description -p Pattern for formatting the output --help Display this help message --no-color Disable syntax coloring of log events -n Number of entries to display -l, --level The minimal log level to display 13.3. log:exception-display 13.3.1. Description Displays the last occurred exception from the log. 13.3.2. Syntax log:exception-display [options] [logger] 13.3.3. Arguments Name Description logger The name of the logger. This can be ROOT, ALL, or the name of a logger specified in the org.ops4j.pax.logger.cfg file. 13.3.4. Options Name Description --help Display this help message 13.4. log:get 13.4.1. Description Shows the currently set log level. 13.4.2. Syntax log:get [options] [logger] 13.4.3. Arguments Name Description logger The name of the logger or ALL (default) 13.4.4. Options Name Description --help Display this help message --no-format Disable table rendered output 13.5. log:load-test 13.5.1. Description Load test log. 13.5.2. Syntax log:load-test [options] 13.5.3. Options Name Description --help Display this help message --messaged --threads 13.6. log:log 13.6.1. Description Log a message. 13.6.2. Syntax log:log [options] message 13.6.3. Arguments Name Description message The message to log 13.6.4. Options Name Description --help Display this help message --level, -l The level the message will be logged at 13.7. log:set 13.7.1. Description Sets the log level. 13.7.2. Syntax log:set [options] level [logger] 13.7.3. Arguments Name Description level The log level to set (TRACE, DEBUG, INFO, WARN, ERROR) or DEFAULT to unset logger Logger name or ROOT (default) 13.7.4. Options Name Description --help Display this help message 13.8. log:tail 13.8.1. Description Continuously display log entries. Use ctrl-c to quit this command 13.8.2. Syntax log:tail [options] [logger] 13.8.3. Arguments Name Description logger The name of the logger. This can be ROOT, ALL, or the name of a logger specified in the org.ops4j.pax.logger.cfg file. 13.8.4. Options Name Description -p Pattern for formatting the output --help Display this help message --no-color Disable syntax coloring of log events -n Number of entries to display -l, --level The minimal log level to display
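As a brief, hedged illustration of how these commands are typically combined in an interactive session (the logger name org.apache.camel is only an example; substitute any logger configured on your system):
karaf@root()> log:set DEBUG org.apache.camel
karaf@root()> log:get org.apache.camel
karaf@root()> log:display -n 50 -l WARN
karaf@root()> log:tail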
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_karaf_console_reference/log
Chapter 8. Configuring automatic upgrades for secured clusters
Chapter 8. Configuring automatic upgrades for secured clusters You can automate the upgrade process for each secured cluster and view the upgrade status from the RHACS portal. Automatic upgrades make it easier to stay up-to-date by automating the manual task of upgrading each secured cluster. With automatic upgrades, after you upgrade Central, the Sensor, Collector, and Compliance services in all secured clusters automatically upgrade to the latest version. Red Hat Advanced Cluster Security for Kubernetes also enables centralized management of all your secured clusters from within the RHACS portal. The new Clusters view displays information about all your secured clusters, the Sensor version for every cluster, and upgrade status messages. You can also use this view to selectively upgrade your secured clusters or change their configuration. Note The automatic upgrade feature is enabled by default. If you are using a private image registry, you must first push the Sensor and Collector images to your private registry. The Sensor must run with the default RBAC permissions. Automatic upgrades do not preserve any patches that you have made to any Red Hat Advanced Cluster Security for Kubernetes services running in your cluster. However, they preserve all labels and annotations that you have added to any Red Hat Advanced Cluster Security for Kubernetes object. By default, Red Hat Advanced Cluster Security for Kubernetes creates a service account called sensor-upgrader in each secured cluster. This account is highly privileged but is only used during upgrades. If you remove this account, Sensor does not have enough permissions, and you must complete future upgrades manually. 8.1. Enabling automatic upgrades You can enable automatic upgrades to automatically upgrade the Collector and Compliance services in all secured clusters to the latest version. Procedure In the RHACS portal, go to Platform Configuration Clusters. Turn on the Automatically upgrade secured clusters toggle. Note For new installations, the Automatically upgrade secured clusters toggle is enabled by default. 8.2. Disabling automatic upgrades If you want to manage your secured cluster upgrades manually, you can disable automatic upgrades. Procedure In the RHACS portal, go to Platform Configuration Clusters. Turn off the Automatically upgrade secured clusters toggle. Note For new installations, the Automatically upgrade secured clusters toggle is enabled by default. 8.3. Automatic upgrade status The Clusters view lists all clusters and their upgrade statuses. Upgrade status Description Up to date with Central version The secured cluster is running the same version as Central. Upgrade available A new version is available for the Sensor and Collector. Upgrade failed. Retry upgrade. The automatic upgrade failed. Secured cluster version is not managed by RHACS. External tools such as Helm or the Operator control the secured cluster version. You can upgrade the secured cluster using external tools. Pre-flight checks complete The upgrade is in progress. Before performing an automatic upgrade, the upgrade installer runs a pre-flight check and starts the upgrade process only if certain conditions are satisfied. 8.4. Automatic upgrade failure Sometimes, Red Hat Advanced Cluster Security for Kubernetes automatic upgrades might fail. When an upgrade fails, the status message for the secured cluster changes to Upgrade failed. Retry upgrade.
To view more information about why the upgrade failed, check the secured cluster row in the Clusters view. Some common reasons for the failure are: The sensor-upgrader deployment might not have run because of a missing or a non-schedulable image. The pre-flight checks may have failed, either because of insufficient RBAC permissions or because the cluster state is not recognizable. This can happen if you have edited Red Hat Advanced Cluster Security for Kubernetes service configurations or if the auto-upgrade.stackrox.io/component label is missing. There might be errors in executing the upgrade. If this happens, the upgrade installer automatically attempts to roll back the upgrade. Note Sometimes, the rollback can fail as well. In such cases, view the cluster logs to identify the issue or contact support. After you identify and fix the root cause of the upgrade failure, you can use the Retry Upgrade option to upgrade your secured cluster. 8.5. Upgrading secured clusters manually from the RHACS portal If you do not want to enable automatic upgrades, you can manage your secured cluster upgrades by using the Clusters view. To manually trigger upgrades for your secured clusters: Procedure In the RHACS portal, go to Platform Configuration Clusters. Select the Upgrade available option under the Upgrade status column for the cluster you want to upgrade. To upgrade multiple clusters at once, select the checkboxes in the Cluster column for the clusters you want to update. Click Upgrade.
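When troubleshooting from the command line, a possible way to inspect the upgrader components is with oc (this sketch assumes the secured cluster services run in the default stackrox namespace):
oc -n stackrox get serviceaccount sensor-upgrader
oc -n stackrox get deployment sensor-upgrader
oc -n stackrox logs deployment/sensor-upgrader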
null
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/configuring/configure-automatic-upgrades
Chapter 3. General Updates
Chapter 3. General Updates abrt no longer missing a dependency on python-argparse The abrt packages were previously missing a dependency on the python-argparse package, which resulted in errors such as ImportError: No module named argparse; this missing dependency has now been added. This problem usually occurred if customers upgraded from an earlier version of Red Hat Enterprise Linux, or during a fresh installation if customers removed the nfs-utils or ipa-client packages. (BZ#1246539) rds-stress can now correctly send messages of varying size The rds-stress command previously could not send Reliable Datagram Sockets (RDS) messages of varying sizes if RDMA was enabled, due to bugs in both the kernel and the rds-tools package. These bugs have been fixed, and you can now send RDS messages of any size as expected. (BZ#746716)
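On systems that cannot be updated immediately, a possible interim workaround (an assumption, not part of the erratum text) is to install the missing package manually:
yum install python-argparse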
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.8_technical_notes/bug_fixes_general_updates
Chapter 1. Installing and configuring the Hot Rod .NET/C# client
Chapter 1. Installing and configuring the Hot Rod .NET/C# client Install the Hot Rod .NET/C# client on Microsoft Windows systems where you use .NET Framework to interact with Data Grid clusters via the RemoteCache API. 1.1. Installing Hot Rod .NET/C# clients Data Grid provides an installation package to install the Hot Rod .NET/C# client on Windows. Prerequisites Any operating system on which Microsoft supports the .NET Framework .NET Framework 4.6.2 or later Windows Visual Studio 2015 or later Procedure Download redhat-datagrid-<version>-hotrod-dotnet-client.msi from the Data Grid Software Downloads. Launch the MSI installer for the Hot Rod .NET/C# client and follow the interactive wizard through the installation process. 1.2. Configuration and Remote Cache Manager APIs Use the ConfigurationBuilder API to configure Hot Rod .NET/C# client connections and the RemoteCacheManager API to obtain and configure remote caches. Basic configuration using Infinispan.HotRod; using Infinispan.HotRod.Config; using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; namespace simpleapp { class Program { static void Main(string[] args) { ConfigurationBuilder builder = new ConfigurationBuilder(); // Connect to a server at localhost with the default port. builder.AddServer() .Host(args.Length > 1 ? args[0] : "127.0.0.1") .Port(args.Length > 2 ? int.Parse(args[1]) : 11222); Configuration config = builder.Build(); // Create and start a RemoteCacheManager to interact with caches. RemoteCacheManager remoteManager = new RemoteCacheManager(config); remoteManager.Start(); IRemoteCache<string,string> cache=remoteManager.GetCache<string, string>(); cache.Put("key", "value"); Console.WriteLine("key = {0}", cache.Get("key")); remoteManager.Stop(); } } } Authentication ConfigurationBuilder builder = new ConfigurationBuilder(); // Add a server with specific connection timeouts builder.AddServer().Host("127.0.0.1").Port(11222).ConnectionTimeout(90000).SocketTimeout(900); // ConfigurationBuilder has a fluent interface; options can be appended in a chain. // Enabling authentication with server name "node0", // sasl mech "PLAIN", user "supervisor", password "aPassword", security realm "aRealm" builder.Security().Authentication().Enable().ServerFQDN("node0") .SaslMechanism("PLAIN").SetupCallback("supervisor", "aPassword", "aRealm"); Configuration c = builder.Build(); Encryption ConfigurationBuilder builder = new ConfigurationBuilder(); builder.AddServer().Host("127.0.0.1").Port(11222); // Get configuration builder for encryption SslConfigurationBuilder sslBuilder = builder.Ssl(); // Enable encryption and provide client certificate sslBuilder.Enable().ClientCertificateFile("clientCertFilename"); // Provide server cert if server needs to be verified sslBuilder.ServerCAFile("serverCertFilename"); Configuration c = builder.Build(); Cross-site failover ConfigurationBuilder builder = new ConfigurationBuilder(); builder.AddServer().Host("127.0.0.1").Port(11222); // Configure a remote cluster and node when using cross-site failover. builder.AddCluster("nyc").AddClusterNode("192.0.2.0", 11322); Near caching ConfigurationBuilder builder = new ConfigurationBuilder(); builder.AddServer().Host("127.0.0.1").Port(11222); // Enable near-caching for the client. builder.NearCache().Mode(NearCacheMode.INVALIDATED).MaxEntries(10);
[ "using Infinispan.HotRod; using Infinispan.HotRod.Config; using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; namespace simpleapp { class Program { static void Main(string[] args) { ConfigurationBuilder builder = new ConfigurationBuilder(); // Connect to a server at localhost with the default port. builder.AddServer() .Host(args.Length > 1 ? args[0] : \"127.0.0.1\") .Port(args.Length > 2 ? int.Parse(args[1]) : 11222); Configuration config = builder.Build(); // Create and start a RemoteCacheManager to interact with caches. RemoteCacheManager remoteManager = new RemoteCacheManager(conf); remoteManager.Start(); IRemoteCache<string,string> cache=remoteManager.GetCache<string, string>(); cache.Put(\"key\", \"value\"); Console.WriteLine(\"key = {0}\", cache.Get(\"key\")); remoteManager.Stop(); } } }", "ConfigurationBuilder builder = new ConfigurationBuilder(); // Add a server with specific connection timeouts builder.AddServer().Host(\"127.0.0.1\").Port(11222).ConnectionTimeout(90000).SocketTimeout(900); // ConfigurationBuilder has fluent interface, options can be appended in chain. // Enabling authentication with server name \"node0\", // sasl mech \"PLAIN\", user \"supervisor\", password \"aPassword\", security realm \"aRealm\" builder.Security().Authentication().Enable().ServerFQDN(\"node0\") .SaslMechanism(\"PLAIN\").SetupCallback(\"supervisor\", \"aPassword\", \"aRealm\"); Configuration c = conf.Build();", "ConfigurationBuilder builder = new ConfigurationBuilder(); builder.AddServer().Host(\"127.0.0.1\").Port(11222); // Get configuration builder for encryption SslConfigurationBuilder sslBuilder = conf.Ssl(); // Enable encryption and provide client certificate sslBuilder.Enable().ClientCertificateFile(\"clientCertFilename\"); // Provide server cert if server needs to be verified sslBuilder.ServerCAFile(\"serverCertFilename\"); Configuration c = conf.Build();", "ConfigurationBuilder builder = new ConfigurationBuilder(); builder.AddServer().Host(\"127.0.0.1\").Port(11222); // Configure a remote cluster and node when using cross-site failover. builder.AddCluster(\"nyc\").AddClusterNode(\"192.0.2.0\", 11322);", "ConfigurationBuilder builder = new ConfigurationBuilder(); builder.AddServer().Host(\"127.0.0.1\").Port(11222); // Enable near-caching for the client. builder.NearCache().Mode(NearCacheMode.INVALIDATED).MaxEntries(10);" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/hot_rod_.net_client_guide/installation-configuration
Release notes for Red Hat build of OpenJDK 11.0.9
Release notes for Red Hat build of OpenJDK 11.0.9 Red Hat build of OpenJDK 11 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.9/index
Part IV. Reference material
Part IV. Reference material
null
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/maintaining_red_hat_hyperconverged_infrastructure_for_virtualization/reference_material
Using the AMQ C++ Client
Using the AMQ C++ Client Red Hat AMQ 2020.Q4 For Use with AMQ Clients 2.8
null
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_cpp_client/index
Chapter 17. Enabling faster client IO or recovery IO during OSD backfill
Chapter 17. Enabling faster client IO or recovery IO during OSD backfill During a maintenance window, you may want to favor either client IO or recovery IO. Favoring recovery IO over client IO will significantly reduce OSD recovery time. The valid recovery profile options are balanced , high_client_ops , and high_recovery_ops . Set the recovery profile using the following procedure. Prerequisites Download the odf-cli tool from the customer portal . Procedure Check the current recovery profile: Modify the recovery profile: Replace option with either balanced , high_client_ops , or high_recovery_ops . Verify the updated recovery profile:
[ "odf get recovery-profile", "odf set recovery-profile <option>", "odf get recovery-profile" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/managing_and_allocating_storage_resources/enabling-faster-client-io-or-recoveryy-io-during-osd-backfill_rhodf
function::proc_mem_string_pid
function::proc_mem_string_pid Name function::proc_mem_string_pid - Human readable string of process memory usage Synopsis Arguments pid The pid of process to examine Description Returns a human readable string showing the size, rss, shr, txt and data of the memory used by the given process. For example " size: 301m, rss: 11m, shr: 8m, txt: 52k, data: 2248k " .
[ "function proc_mem_string_pid:string(pid:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-proc-mem-string-pid
Chapter 11. Using Kerberos
Chapter 11. Using Kerberos Maintaining system security and integrity within a network is critical, and it encompasses every user, application, service, and server within the network infrastructure. It requires an understanding of everything that is running on the network and the manner in which these services are used. At the core of maintaining this security is maintaining access to these applications and services and enforcing that access. Kerberos is an authentication protocol significantly safer than normal password-based authentication. With Kerberos, passwords are never sent over the network, even when services are accessed on other machines. Kerberos provides a mechanism that allows both users and machines to identify themselves to network and receive defined, limited access to the areas and services that the administrator configured. Kerberos authenticates entities by verifying their identity, and Kerberos also secures this authenticating data so that it cannot be accessed and used or tampered with by an outsider. 11.1. About Kerberos Kerberos uses symmetric-key cryptography [3] to authenticate users to network services, which means passwords are never actually sent over the network. Consequently, when users authenticate to network services using Kerberos, unauthorized users attempting to gather passwords by monitoring network traffic are effectively thwarted. 11.1.1. The Basics of How Kerberos Works Most conventional network services use password-based authentication schemes, where a user supplies a password to access a given network server. However, the transmission of authentication information for many services is unencrypted. For such a scheme to be secure, the network has to be inaccessible to outsiders, and all computers and users on the network must be trusted and trustworthy. With simple, password-based authentication, a network that is connected to the Internet cannot be assumed to be secure. Any attacker who gains access to the network can use a simple packet analyzer, or packet sniffer , to intercept user names and passwords, compromising user accounts and, therefore, the integrity of the entire security infrastructure. Kerberos eliminates the transmission of unencrypted passwords across the network and removes the potential threat of an attacker sniffing the network. Rather than authenticating each user to each network service separately as with simple password authentication, Kerberos uses symmetric encryption and a trusted third party (a key distribution center or KDC) to authenticate users to a suite of network services. The computers managed by that KDC and any secondary KDCs constitute a realm . When a user authenticates to the KDC, the KDC sends a set of credentials (a ticket ) specific to that session back to the user's machine, and any Kerberos-aware services look for the ticket on the user's machine rather than requiring the user to authenticate using a password. As shown in Figure 11.1, "Kerberos Authentication" , each user is identified to the KDC with a unique identity, called a principal . When a user on a Kerberos-aware network logs into his workstation, his principal is sent to the KDC as part of a request for a ticket-granting ticket (or TGT) from the authentication server. This request can be sent by the login program so that it is transparent to the user or can be sent manually by a user through the kinit program after the user logs in. The KDC then checks for the principal in its database. 
If the principal is found, the KDC creates a TGT, encrypts it using the user's key, and sends the TGT to that user. Figure 11.1. Kerberos Authentication The login or kinit program on the client then decrypts the TGT using the user's key, which it computes from the user's password. The user's key is used only on the client machine and is not transmitted over the network. The ticket (or credentials) sent by the KDC are stored in a local store, the credential cache (ccache) , which can be checked by Kerberos-aware services. Red Hat Enterprise Linux 7 supports the following types of credential caches: The persistent KEYRING ccache type, the default cache in Red Hat Enterprise Linux 7 The System Security Services Daemon (SSSD) Kerberos Credential Manager (KCM), an alternative option since Red Hat Enterprise Linux 7.4 FILE DIR MEMORY With SSSD KCM, the Kerberos caches are not stored in a passive store, but managed by a daemon. In this setup, the Kerberos library, which is typically used by applications such as kinit , is a KCM client and the daemon is referred to as a KCM server. Having the Kerberos credential caches managed by the SSSD KCM daemon has several advantages: The daemon is stateful and can perform tasks such as Kerberos credential cache renewals or reaping old ccaches. Renewals and tracking are possible not only for tickets that SSSD itself acquired, typically via a login through pam_sss.so , but also for tickets acquired, for example, though kinit . Since the process runs in user space, it is subject to UID namespacing, unlike the Kernel KEYRING. Unlike the Kernel KEYRING-based cache, which is entirely dependent on the UID of the caller and which, in a containerized environment, is shared among all containers, the KCM server's entry point is a UNIX socket that can be bind-mounted only to selected containers. After authentication, servers can check an unencrypted list of recognized principals and their keys rather than checking kinit ; this is kept in a keytab . The TGT is set to expire after a certain period of time (usually 10 to 24 hours) and is stored in the client machine's credential cache. An expiration time is set so that a compromised TGT is of use to an attacker for only a short period of time. After the TGT has been issued, the user does not have to enter their password again until the TGT expires or until they log out and log in again. Whenever the user needs access to a network service, the client software uses the TGT to request a new ticket for that specific service from the ticket-granting server (TGS). The service ticket is then used to authenticate the user to that service transparently. 11.1.2. About Kerberos Principal Names The principal identifies not only the user or service, but also the realm that the entity belongs to. A principal name has two parts, the identifier and the realm: For a user, the identifier is only the Kerberos user name. For a service, the identifier is a combination of the service name and the host name of the machine it runs on: The service name is a case-sensitive string that is specific to the service type, like host , ldap , http , and DNS . Not all services have obvious principal identifiers; the sshd daemon, for example, uses the host service principal. The host principal is usually stored in /etc/krb5.keytab . When Kerberos requests a ticket, it always resolves the domain name aliases (DNS CNAME records) to the corresponding DNS address (A or AAAA records). 
The host name from the address record is then used when service or host principals are created. For example: A service attempts to connect to the host using its CNAME alias: The Kerberos server requests a ticket for the resolved host name, web-01.example.com@EXAMPLE.COM, so the host principal must be host/web-01.example.com@EXAMPLE.COM. 11.1.3. About the Domain-to-Realm Mapping When a client attempts to access a service running on a particular server, it knows the name of the service ( host ) and the name of the server ( foo.example.com ), but because more than one realm can be deployed on the network, it must guess at the name of the Kerberos realm in which the service resides. By default, the name of the realm is taken to be the DNS domain name of the server in all capital letters. In some configurations, this will be sufficient, but in others, the derived realm name will be the name of a non-existent realm. In these cases, the mapping from the server's DNS domain name to the name of its realm must be specified in the domain_realm section of the client system's /etc/krb5.conf file. For example: The configuration specifies two mappings. The first mapping specifies that any system in the example.com DNS domain belongs to the EXAMPLE.COM realm. The second specifies that a system with the exact name example.com is also in the realm. The distinction between a domain and a specific host is marked by the presence or lack of an initial period character. The mapping can also be stored directly in DNS using the "_kerberos TXT" records, for example: 11.1.4. Environmental Requirements Kerberos relies on being able to resolve machine names. Thus, it requires a working domain name service (DNS). Both DNS entries and hosts on the network must be properly configured, which is covered in the Kerberos documentation in /usr/share/doc/krb5-server-version-number . Applications that accept Kerberos authentication require time synchronization. You can set up approximate clock synchronization between the machines on the network using a service such as ntpd . For information on the ntpd service, see the documentation in /usr/share/doc/ntp-version-number/html/index.html or the ntpd(8) man page. Note Kerberos clients running Red Hat Enterprise Linux 7 support automatic time adjustment with the KDC and have no strict timing requirements. This enables better tolerance to clocking differences when deploying IdM clients with Red Hat Enterprise Linux 7. 11.1.5. Considerations for Deploying Kerberos Although Kerberos removes a common and severe security threat, it is difficult to implement for a variety of reasons: Kerberos assumes that each user is trusted but is using an untrusted host on an untrusted network. Its primary goal is to prevent unencrypted passwords from being transmitted across that network. However, if anyone other than the proper user has access to the one host that issues tickets used for authentication - the KDC - the entire Kerberos authentication system is at risk. For an application to use Kerberos, its source must be modified to make the appropriate calls into the Kerberos libraries. Applications modified in this way are considered to be Kerberos-aware . For some applications, this can be quite problematic due to the size of the application or its design. For other incompatible applications, changes must be made to the way in which the server and client communicate. Again, this can require extensive programming.
Closed source applications that do not have Kerberos support by default are often the most problematic. To secure a network with Kerberos, one must either use Kerberos-aware versions of all client and server applications that transmit passwords unencrypted, or not use that client and server application at all. Migrating user passwords from a standard UNIX password database, such as /etc/passwd or /etc/shadow , to a Kerberos password database can be tedious. There is no automated mechanism to perform this task. Migration methods can vary substantially depending on the particular way Kerberos is deployed. That is why it is recommended that you use the Identity Management feature; it has specialized tools and methods for migration. Warning The Kerberos system can be compromised if a user on the network authenticates against a non-Kerberos aware service by transmitting a password in plain text. The use of non-Kerberos aware services (including telnet and FTP) is highly discouraged. Other encrypted protocols, such as SSH or SSL-secured services, are preferred to unencrypted services, but this is still not ideal. 11.1.6. Additional Resources for Kerberos Kerberos can be a complex service to implement, with a lot of flexibility in how it is deployed. Table 11.1, "External Kerberos Documentation" and Table 11.2, "Important Kerberos Man Pages" list of a few of the most important or most useful sources for more information on using Kerberos. Table 11.1. External Kerberos Documentation Documentation Location Kerberos V5 Installation Guide (in both PostScript and HTML) /usr/share/doc/krb5-server- version-number Kerberos V5 System Administrator's Guide (in both PostScript and HTML) /usr/share/doc/krb5-server- version-number Kerberos V5 UNIX User's Guide (in both PostScript and HTML) /usr/share/doc/krb5-workstation- version-number "Kerberos: The Network Authentication Protocol" web page from MIT http://web.mit.edu/kerberos/www/ Designing an Authentication System: a Dialogue in Four Scenes , originally by Bill Bryant in 1988, modified by Theodore Ts'o in 1997. This document is a conversation between two developers who are thinking through the creation of a Kerberos-style authentication system. The conversational style of the discussion makes this a good starting place for people who are completely unfamiliar with Kerberos. http://web.mit.edu/kerberos/www/dialogue.html An article for making a network Kerberos-aware. http://www.ornl.gov/~jar/HowToKerb.html Any of the manpage files can be opened by running man command_name . Table 11.2. Important Kerberos Man Pages Manpage Description Client Applications kerberos An introduction to the Kerberos system which describes how credentials work and provides recommendations for obtaining and destroying Kerberos tickets. The bottom of the man page references a number of related man pages. kinit Describes how to use this command to obtain and cache a ticket-granting ticket. kdestroy Describes how to use this command to destroy Kerberos credentials. klist Describes how to use this command to list cached Kerberos credentials. Administrative Applications kadmin Describes how to use this command to administer the Kerberos V5 database. kdb5_util Describes how to use this command to create and perform low-level administrative functions on the Kerberos V5 database. Server Applications krb5kdc Describes available command line options for the Kerberos V5 KDC. kadmind Describes available command line options for the Kerberos V5 administration server. 
Configuration Files krb5.conf Describes the format and options available within the configuration file for the Kerberos V5 library. kdc.conf Describes the format and options available within the configuration file for the Kerberos V5 AS and KDC. [3] A system where both the client and the server share a common key that is used to encrypt and decrypt network communication.
[ "identifier @ REALM", "service/FQDN @ REALM", "www.example.com CNAME web-01.example.com web-01.example.com A 192.0.2.145", "ssh www.example.com", "foo.example.org EXAMPLE.ORG foo.example.com EXAMPLE.COM foo.hq.example.com HQ.EXAMPLE.COM", "[domain_realm] .example.com = EXAMPLE.COM example.com = EXAMPLE.COM", "USDORIGIN example.com _kerberos TXT \"EXAMPLE.COM\"" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system-level_authentication_guide/Using_Kerberos
function::ubacktrace
function::ubacktrace Name function::ubacktrace - Hex backtrace of current user-space task stack. Synopsis Arguments None Description Return a string of hex addresses that are a backtrace of the stack of the current task. Output may be truncated as per maximum string length. Returns an empty string when the current probe point cannot determine a user backtrace. See backtrace for kernel traceback. Note To get (full) backtraces for user space applications and shared libraries not mentioned in the current script, run stap with -d /path/to/exe-or-so and/or add --ldd to load all needed unwind data.
[ "ubacktrace:string()" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-ubacktrace
Chapter 23. Customizing the branding of Business Central
Chapter 23. Customizing the branding of Business Central You can customize the branding of the Business Central login page and application header by replacing the images with your own. 23.1. Customizing the Business Central login page You can customize the company logo and the project logo on the Business Central login page. Procedure Start Red Hat JBoss EAP and open Business Central in a web browser. Copy an SVG format image to the EAP_HOME /standalone/deployments/business-central.war/img/ directory in your Red Hat Decision Manager installation. In the EAP_HOME /standalone/deployments/business-central.war/img/ directory, either move or rename the existing redhat_logo.png file. Rename your PNG file redhat_logo.png . To change the project logo that appears above the User name and Password fields, replace the default image BC_Logo.png with a new SVG file. Force a full reload of the login page, bypassing the cache, to view the changes. For example, in most Linux and Windows web browsers, press Ctrl+F5. 23.2. Customizing Business Central application header You can customize the Business Central application header. Procedure Start Red Hat JBoss EAP, open Business Central in a web browser, and log in with your user credentials. Copy your new application header image in the SVG format to the EAP_HOME /standalone/deployments/business-central.war/banner/ directory in your Red Hat Decision Manager installation. Open the EAP_HOME /standalone/deployments/business-central.war/banner/banner.html file in a text editor. Replace logo.png in the <img> tag with the file name of your new image: Force a full reload of the login page, bypassing the cache, to view the changes. For example, in most Linux and Windows web browsers, press Ctrl+F5.
[ "<img src=\"banner/logo.png\"/>" ]
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/installing_and_configuring_red_hat_decision_manager/customizing_the_branding_of_business_central
Chapter 48. Installing and running KIE Server with IBM WebSphere Application Server
Chapter 48. Installing and running KIE Server with IBM WebSphere Application Server After you have configured all required system properties in IBM WebSphere Application Server, you can install KIE Server with IBM WebSphere to streamline Red Hat Process Automation Manager application management. Prerequisites An IBM WebSphere Application Server instance is configured as described in Chapter 47, Configuring IBM WebSphere Application Server for KIE Server . Procedure Navigate to the Software Downloads page in the Red Hat Customer Portal (login required), and select the product and version from the drop-down options: Product: Process Automation Manager Version: 7.13.5 Download Red Hat Process Automation Manager 7.13.5 KIE Server for All Supported EE7 Containers . Extract the rhpam-7.13.5-kie-server-ee7.zip archive to a temporary directory. In the following examples this directory is called TEMP_DIR . Repackage the kie-server.war directory: Navigate to the TEMP_DIR /rhpam-7.13.5-kie-server-ee7/kie-server.war directory. Select the contents of the TEMP_DIR /rhpam-7.13.5-kie-server-ee7/kie-server.war directory and create the kie-server.zip file. Rename kie-server.zip to kie-server.war . This is the file that you will use to deploy KIE Server. Optional: Copy the new kie-server.war file to a location that is more convenient to deploy from. In the WebSphere Integrated Solutions Console, navigate to Applications Application Types WebSphere Enterprise Applications . Click Install . Navigate to the kie-server.war file that you repackaged and select it to upload. Select Fast Path and click Next . The Install New Application wizard opens. Change the Application Name to kie-server and click Next . Map the KIE Server modules to servers according to your specific requirements and click Next . For Bind Listeners for Message-Driven Beans , select Activation Specification for both beans, enter jms/activation/KIE.SERVER.REQUEST in the Target Resource JNDI Name field, and enter the jms/cf/KIE.SERVER.REQUEST JNDI name for the KIE.SERVER.REQUEST connection factory. In the Map Virtual Hosts for Web Modules section, keep the default values and click Next . Set the context root to kie-server . In the Metadata for Modules section, keep the default values and click Next . Click Finish to install KIE Server and click Save to save the changes to the primary configuration. 48.1. Creating the KIE Server group and role After KIE Server is installed, you must create the kie-server group and a user. Prerequisites KIE Server is installed on the IBM WebSphere Application Server instance. Procedure In the WebSphere Integrated Solutions Console, click Users and Groups Manage Groups . In the Manage Groups screen, click Create . In the Create a Group screen, enter kie-server in the Group name box, then click Create . To create a user to add to the kie-server group, click Users and Groups Manage Users . In the Create a User screen, complete the required information. Click Group Membership . In the Group Membership screen, click kie-server , move it to Mapped To , and click Close . On the Create a User screen, click Create . 48.2. Mapping the KIE Server group and role After KIE Server is installed, you must map the kie-server role to the kie-server group in the WebSphere Integrated Solutions Console to run KIE Server. Prerequisites KIE Server is installed on the IBM WebSphere Application Server instance. IBM WebSphere Application Server has the kie-server group with at least one user.
Procedure In the WebSphere Integrated Solutions Console, navigate to Applications Application Types WebSphere Enterprise Applications and select the newly installed kie-server application. Under Detail Properties , click Security Role to User/Group Mapping . Select the kie-server role and click Map Groups to search for the kie-server group. Move the kie-server group from the Available list to the Selected list and click OK . This mapping gives users in the IBM WebSphere Application Server kie-server group access to KIE Server. Click Save to complete the mapping. 48.3. Configuring class loading for KIE Server After KIE Server is installed, you must configure class loading to set parent classes to load last. Procedure Navigate to Applications Application Types WebSphere Enterprise Applications and click kie-server . Click Class Loading and Update Detection under the Detail Properties heading on the left. In the properties, change Class Loader Order to Classes loaded with local class loader first (parent last) and WAR Class Loader Policy to Single class loader for application . Save the changes to the primary configuration. 48.4. Verifying the installation After you install KIE Server and define the KIE Server group mapping, verify that the server is running. Prerequisites KIE Server is installed on the IBM WebSphere Application Server instance. You have set all required system properties for the headless Process Automation Manager controller. You have defined the KIE Server group mapping in IBM WebSphere Application Server. Procedure To verify that the server is running, complete one of the following tasks: Navigate to the KIE Server URL http://<HOST>:<PORT>/kie-server . Send a GET request to http://<HOST>:<PORT>/kie-server/services/rest/server to check whether the KIE Server REST API responds. In these examples, replace the following placeholders: <HOST> is the ID or name of the headless Process Automation Manager controller, for example, localhost or 192.7.8.9 . <PORT> is the port number of the KIE Server host, for example, 9060 . If KIE Server is not running, stop and restart the IBM WebSphere Application Server instance and try again to access the KIE Server URL or API.
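For example, a quick way to exercise the REST endpoint from a shell (the credentials are placeholders for a user in the kie-server group, and the port assumes the default WebSphere HTTP transport):
curl -u <user>:<password> http://<HOST>:<PORT>/kie-server/services/rest/server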
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/installing_and_configuring_red_hat_process_automation_manager/kie-server-was-install-proc
9.4. Selecting Appropriate Authentication Methods
9.4. Selecting Appropriate Authentication Methods A basic decision regarding the security policy is how users access the directory. Are anonymous users allowed to access the directory, or is every user required to log into the directory with a user name and password (authenticate)? Directory Server provides the following methods for authentication: Section 9.4.1, "Anonymous and Unauthenticated Access" Section 9.4.2, "Simple Binds and Secure Binds" Section 9.4.3, "Certificate-Based Authentication" Section 9.4.4, "Proxy Authentication" Section 9.4.6, "Password-less Authentication" The directory uses the same authentication mechanism for all users, whether they are people or LDAP-aware applications. For information about preventing authentication by a client or group of clients, see Section 9.5, "Designing an Account Lockout Policy" . 9.4.1. Anonymous and Unauthenticated Access Anonymous access provides the easiest form of access to the directory. It makes data available to any user of the directory, regardless of whether they have authenticated. However, anonymous access does not allow administrators to track who is performing what kinds of searches, only that someone is performing searches. With anonymous access, anyone who connects to the directory can access the data. Therefore, an administrator may attempt to block a specific user or group of users from accessing some kinds of directory data, but, if anonymous access is allowed to that data, those users can still access the data simply by binding to the directory anonymously. Anonymous access can be limited. Usually directory administrators only allow anonymous access for read, search, and compare privileges (not for write, add, delete, or selfwrite). Often, administrators limit access to a subset of attributes that contain general information such as names, telephone numbers, and email addresses. Anonymous access should never be allowed for more sensitive data such as government identification numbers (for example, Social Security Numbers in the US), home telephone numbers and addresses, and salary information. Anonymous access can also be disabled entirely, if there is a need for tighter rules on who accesses the directory data. An unauthenticated bind is when a user attempts to bind with a user name but without a user password attribute. For example: The Directory Server grants anonymous access if the user does not attempt to provide a password. An unauthenticated bind does not require that the bind DN be an existing entry. As with anonymous binds, unauthenticated binds can be disabled to increase security by limiting access to the database. Disabling unauthenticated binds has another advantage: it can be used to prevent silent bind failures for clients. A poorly-written application may believe that it successfully authenticated to the directory because it received a bind success message when, in reality, it failed to pass a password and simply connected with an unauthenticated bind. 9.4.2. Simple Binds and Secure Binds If anonymous access is not allowed, users must authenticate to the directory before they can access the directory contents. With simple password authentication, a client authenticates to the server by sending a reusable password. For example, a client authenticates to the directory using a bind operation in which it provides a distinguished name and a set of credentials. 
The server locates the entry in the directory that corresponds to the client DN and checks whether the password given by the client matches the value stored with the entry. If it does, the server authenticates the client. If it does not, the authentication operation fails, and the client receives an error message. The bind DN often corresponds to the entry of a person. However, some directory administrators find it useful to bind as an organizational entry rather than as a person. The directory requires the entry used to bind to be of an object class that allows the userPassword attribute. This ensures that the directory recognizes the bind DN and password. Most LDAP clients hide the bind DN from the user because users may find the long strings of DN characters hard to remember. When a client attempts to hide the bind DN from the user, it uses a bind algorithm such as the following: The user enters a unique identifier, such as a user ID (for example, fchen ). The LDAP client application searches the directory for that identifier and returns the associated distinguished name (such as uid=fchen,ou=people,dc=example,dc=com ). The LDAP client application binds to the directory using the retrieved distinguished name and the password supplied by the user. Simple password authentication offers an easy way to authenticate users, but it requires extra security to be used safely. Consider restricting its use to the organization's intranet. For connections between business partners over an extranet or transmissions with customers on the Internet, require a secure (encrypted) connection. Note The drawback of simple password authentication is that the password is sent in plain text. If an unauthorized user is listening, this can compromise the security of the directory because that person can impersonate an authorized user. The nsslapd-require-secure-binds configuration attribute requires simple password authentication to occur over a secure connection, using TLS or Start TLS. This effectively encrypts the plaintext password so it cannot be intercepted by an attacker. When a secure connection is established between Directory Server and a client application using TLS or the Start TLS operation, the client performs a simple bind with an extra level of protection because the password is never transmitted in plaintext. The nsslapd-require-secure-binds setting also allows alternative secure connection types, such as SASL authentication or certificate-based authentication. For more information about secure connections, see Section 9.9, "Securing Server Connections" . 9.4.3. Certificate-Based Authentication An alternative form of directory authentication involves using digital certificates to bind to the directory. The directory prompts users for a password when they first access it. However, rather than matching a password stored in the directory, the password opens the user's certificate database. If the user supplies the correct password, the directory client application obtains authentication information from the certificate database. The client application and the directory then use this information to identify the user by mapping the user's certificate to a directory DN. The directory allows or denies access based on the directory DN identified during this authentication process. For more information about certificates and TLS, see the Administration Guide . 9.4.4.
Proxy Authentication Proxy authentication is a special form of authentication because the user requesting access to the directory does not bind with its own DN but with a proxy DN . The proxy DN is an entity that has appropriate rights to perform the operation requested by the user. When proxy rights are granted to a person or an application, they are granted the right to specify any DN as a proxy DN, with the exception of the Directory Manager DN. One of the main advantages of proxy rights is that an LDAP application can be enabled to use a single thread with a single bind to service multiple users making requests against the Directory Server. Instead of having to bind and authenticate for each user, the client application binds to the Directory Server using a proxy DN. The proxy DN is specified in the LDAP operation submitted by the client application. For example: This ldapmodify command lets the Directory Manager ( cn=Directory Manager ) apply the modifications in the mods.ldif file with the permissions of a user named Joe ( cn=joe ). The manager does not need to provide Joe's password to make this change. Note The proxy mechanism is very powerful and must be used sparingly. Proxy rights are granted within the scope of the ACL, and there is no way to restrict who can be impersonated by an entry that has the proxy right. That is, when a user is granted proxy rights, that user has the ability to proxy for any user under the target; there is no way to restrict the proxy rights to only certain users. For example, if an entity has proxy rights to the dc=example,dc=com tree, that entity can perform operations as any user whose entry is under that tree. Therefore, ensure that the proxy ACI is set at the lowest possible level of the DIT. For more information on this topic, see the "Proxied Authorization ACI Example" section in the "Managing Access Control" chapter of the Administration Guide . 9.4.5. Pass-through Authentication Pass-through authentication is when an authentication request is forwarded from one server to another service. For example, whenever all of the configuration information for an instance is stored in another directory instance, the Directory Server uses pass-through authentication for the User Directory Server to connect to the Configuration Directory Server. Directory Server-to-Directory Server pass-through authentication is handled with the PTA Plug-in. Figure 9.1. Simple Pass-through Authentication Process Many systems already have authentication mechanisms in place for Unix and Linux users. One of the most common authentication frameworks is Pluggable Authentication Modules (PAM). Since many networks already have existing authentication services available, administrators may want to continue using those services. A PAM module can be configured to tell Directory Server to use an existing authentication store for LDAP clients. PAM pass-through authentication in Red Hat Directory Server uses the PAM Pass-through Authentication Plug-in, which enables the Directory Server to talk to the PAM service to authenticate LDAP clients. Figure 9.2. PAM Pass-through Authentication Process With PAM pass-through authentication, when a user attempts to bind to the Directory Server, the credentials are forwarded to the PAM service. If the credentials match the information in the PAM service, then the user can successfully bind to the Directory Server, with all of the Directory Server access control restrictions and account settings in place.
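As a rough sketch of how the plug-in might be enabled (the plug-in entry name and attribute values shown here are assumptions and should be checked against the configuration reference for your version), the PAM Pass-through Authentication Plug-in is typically configured by modifying its entry under cn=plugins,cn=config with ldapmodify and then restarting the instance:
# assumed plug-in configuration entry
dn: cn=PAM Pass Through Auth,cn=plugins,cn=config
changetype: modify
replace: nsslapd-pluginEnabled
nsslapd-pluginEnabled: on
-
# assumed attribute names: PAM service to call and how to map the bind DN to a PAM identity
replace: pamService
pamService: system-auth
-
replace: pamIDMapMethod
pamIDMapMethod: RDN
-
replace: pamFallback
pamFallback: FALSE
After applying a change like this, restart the Directory Server instance so that the new plug-in configuration takes effect.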
Note The Directory Server can be configured to use PAM, but it cannot be used to set up PAM to use the Directory Server for authentication. For PAM to use a Directory Server instance for authentication, the pam_ldap module must be properly configured. For general configuration information about pam_ldap , see the pam_ldap (5) man page (for example, http://linux.die.net/man/5/pam_ldap ). The PAM service can be configured using system tools like the System Security Services Daemon (SSSD). SSSD can use a variety of different identity providers, including Active Directory, Red Hat Directory Server or other directories like OpenLDAP, or local system settings. To use SSSD, point the PAM Pass-through Authentication Plug-in to the PAM service file used by SSSD, which is /etc/pam.d/system-auth by default. 9.4.6. Password-less Authentication An authentication attempt first evaluates whether the user account is able to authenticate. The account must be active, it must not be locked, and it must have a valid password according to any applicable password policy (meaning it cannot be expired or need to be reset). There can be times when that evaluation of whether a user should be permitted to authenticate needs to be performed, but the user should not (or cannot) actually bind to the Directory Server. For example, a system may be using PAM to manage system accounts, and PAM is configured to use the LDAP directory as its identity store. However, the system is using password-less credentials, such as SSH keys or RSA tokens, and those credentials cannot be passed to authenticate to the Directory Server. Red Hat Directory Server supports the Account Usability Extension Control for ldapsearch operations. This control returns information about the account status and any password policies in effect (such as a required reset, a password expiration warning, or the number of grace logins left after password expiration) - all the information that would be returned in a bind attempt but without authenticating and binding to the Directory Server as that user. That allows the client to determine if the user should be allowed to authenticate based on the Directory Server settings and information, but the actual authentication process is performed outside of Directory Server. This control can be used with system-level services like PAM to allow password-less logins that still use Directory Server to store identities and even control account status. Note The Account Usability Extension Control can only be used by the Directory Manager, by default. To allow other users to use the control, set the appropriate ACI on the supported control entry, oid=1.3.6.1.4.1.42.2.27.9.5.8,cn=features,cn=config .
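For example, a hedged sketch of such an ACI follows; the bind DN granted access here ( uid=pam-service ) is a hypothetical placeholder, and the exact ACI wording should be taken from the access control documentation for your version:
# assumed example only: grant a service account the right to read the supported control entry
dn: oid=1.3.6.1.4.1.42.2.27.9.5.8,cn=features,cn=config
changetype: modify
add: aci
aci: (targetattr = "*")(version 3.0; acl "Account Usability for provisioning service"; allow (read, search, compare) userdn = "ldap:///uid=pam-service,ou=people,dc=example,dc=com";)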
[ "ldapsearch -x -D \"cn=jsmith,ou=people,dc=example,dc=com\" -b \"dc=example,dc=com\" \"(cn=joe)\"", "ldapmodify -D \"cn=Directory Manager\" -W -x -D \"cn=directory manager\" -W -p 389 -h server.example.com -x -Y \"cn=joe,dc=example,dc=com\" -f mods.ldif" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/designing_a_secure_directory-selecting_appropriate_authentication_methods
5.18. Additional Resources
5.18. Additional Resources The following sources of information provide additional resources regarding firewalld . 5.18.1. Installed Documentation firewalld(1) man page - Describes command options for firewalld . firewalld.conf(5) man page - Contains information to configure firewalld . firewall-cmd(1) man page - Describes command options for the firewalld command-line client. firewall-config(1) man page - Describes settings for the firewall-config tool. firewall-offline-cmd(1) man page - Describes command options for the firewalld offline command-line client. firewalld.icmptype(5) man page - Describes XML configuration files for ICMP filtering. firewalld.ipset(5) man page - Describes XML configuration files for the firewalld IP sets. firewalld.service(5) man page - Describes XML configuration files for firewalld service . firewalld.zone(5) man page - Describes XML configuration files for firewalld zone configuration. firewalld.direct(5) man page - Describes the firewalld direct interface configuration file. firewalld.lockdown-whitelist(5) man page - Describes the firewalld lockdown whitelist configuration file. firewalld.richlanguage(5) man page - Describes the firewalld rich language rule syntax. firewalld.zones(5) man page - General description of what zones are and how to configure them. firewalld.dbus(5) man page - Describes the D-Bus interface of firewalld . 5.18.2. Online Documentation http://www.firewalld.org/ - firewalld home page.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-firewalld-additional_resources
Chapter 4. Installing a user-provisioned bare metal cluster on a restricted network
Chapter 4. Installing a user-provisioned bare metal cluster on a restricted network In OpenShift Container Platform 4.13, you can install a cluster on bare metal infrastructure that you provision in a restricted network. Important While you might be able to follow this procedure to deploy a cluster on virtualized or cloud environments, you must be aware of additional considerations for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you attempt to install an OpenShift Container Platform cluster in such an environment. 4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You created a registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You provisioned persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 4.2. About installations in restricted networks In OpenShift Container Platform 4.13, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 4.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 4.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to obtain the images that are necessary to install your cluster. 
You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 4.4. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 4.4.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 4.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Note As an exception, you can run zero compute machines in a bare metal cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. Running one compute machine is not supported. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 4.4.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 4.2. Minimum resource requirements Machine Operating System CPU [1] RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One CPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = CPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. 
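As a worked example of the SMT formula above (an illustration, not an additional requirement): a node with 2 sockets and 8 cores per socket that has SMT enabled with 2 threads per core counts as (2 threads per core x 8 cores) x 2 sockets = 32 CPUs, while the same node with SMT disabled counts as 16 CPUs, one per physical core.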
As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 4.4.3. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. Additional resources See Configuring a three-node cluster for details about deploying three-node clusters in bare metal environments. See Approving the certificate signing requests for your machines for more information about approving cluster certificate signing requests after installation. 4.4.4. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. 
If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 4.4.4.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 4.4.4.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Table 4.3. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 4.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 4.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 4.4.5. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. 
DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 4.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 4.4.5.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 
The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 4.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 4.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 
4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. Additional resources Validating DNS resolution for user-provisioned infrastructure 4.4.6. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 4.7. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 4.8. 
Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 4.4.6.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 4.3. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 
5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 4.5. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. 
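If the firewall in question is a Linux host running firewalld (for example, a load balancer or helper node), a hedged illustration of opening some of the required ports follows; the complete port list must come from the tables in the Networking requirements for user-provisioned infrastructure section, and zone or rich-rule details depend on your environment:
firewall-cmd --permanent --add-port=6443/tcp --add-port=22623/tcp --add-port=10250-10259/tcp --add-port=30000-32767/tcp --add-port=9000-9999/tcp
firewall-cmd --permanent --add-port=4789/udp --add-port=6081/udp
firewall-cmd --reload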
Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. Additional resources Requirements for a cluster with user-provisioned infrastructure Installing RHCOS and starting the OpenShift Container Platform bootstrap process Setting the cluster node hostnames through DHCP Advanced RHCOS installation configuration Networking requirements for user-provisioned infrastructure User-provisioned DNS requirements Validating DNS resolution for user-provisioned infrastructure Load balancing requirements for user-provisioned infrastructure 4.6. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 
604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. Additional resources User-provisioned DNS requirements Load balancing requirements for user-provisioned infrastructure 4.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . 
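For example, once the key is in place you might log in to a node with a command such as the following, where the key path and node name match the examples used elsewhere in this document:
ssh -i ~/.ssh/id_ed25519 core@control-plane0.ocp4.example.com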
To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program. Additional resources Verifying node health 4.8. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain the imageContentSources section from the output of the command to mirror the repository. Obtain the contents of the certificate for your mirror registry. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Unless you use a registry that RHCOS trusts by default, such as docker.io , you must provide the contents of the certificate for your mirror repository in the additionalTrustBundle section. In most cases, you must provide the certificate for your mirror. You must include the imageContentSources section from the output of the command to mirror the repository. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. 4.8.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide a customized install-config.yaml installation configuration file that describes the details for your environment. Note After installation, you cannot modify these parameters in the install-config.yaml file. 4.8.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 4.9. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters and hyphens ( - ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 4.8.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. If you use the Red Hat OpenShift Networking OVN-Kubernetes network plugin, both IPv4 and IPv6 address families are supported. If you use the Red Hat OpenShift Networking OpenShift SDN network plugin, only the IPv4 address family is supported. If you configure your cluster to use both IP address families, review the following requirements: Both IP families must use the same network interface for the default gateway. Both IP families must have the default gateway. 
You must specify IPv4 and IPv6 addresses in the same order for all network configuration parameters. For example, in the following configuration IPv4 addresses are listed before IPv6 addresses. networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112 Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 4.10. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. If you use the OpenShift SDN network plugin, specify an IPv4 network. If you use the OVN-Kubernetes network plugin, you can specify IPv4 and IPv6 networks. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The prefix length for an IPv6 block is between 0 and 128 . For example, 10.128.0.0/14 or fd01::/48 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. For an IPv4 network the default value is 23 . For an IPv6 network the default value is 64 . The default value is also the minimum value for IPv6. networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. If you use the OVN-Kubernetes network plugin, you can specify an IP address block for both of the IPv4 and IPv6 address families. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 - fd02::/112 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 or fd00::/48 . 
Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 4.8.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 4.11. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array cpuPartitioningMode Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . String compute: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. 
Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . String controlPlane: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. 4.8.2. Sample install-config.yaml file for bare metal You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. 
apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not enabled in your BIOS settings, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether in the BIOS or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. 
For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for your platform. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. Important OpenShift Container Platform 4.13 is based on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 9.2 cryptographic modules have not yet been submitted for FIPS validation. For more information, see "About this release" in the 4.13 OpenShift Container Platform Release Notes . 15 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 17 Provide the contents of the certificate file that you used for your mirror registry. 18 Provide the imageContentSources section from the output of the command to mirror the repository. Additional resources See Load balancing requirements for user-provisioned infrastructure for more information on the API and application ingress load balancing requirements. 4.8.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Note For bare metal installations, if you do not assign node IP addresses from the range that is specified in the networking.machineNetwork[].cidr field in the install-config.yaml file, you must include them in the proxy.noProxy field. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. 
Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 4.8.4. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a bare metal cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. 
Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 4.9. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host. You created the install-config.yaml installation configuration file. 
Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: Additional resources See Recovering from expired control plane certificates for more information about recovering kubelet certificates. 4.10. Configuring chrony time service You must set the time server and related settings used by the chrony time service ( chronyd ) by modifying the contents of the chrony.conf file and passing those contents to your nodes as a machine config. Procedure Create a Butane config including the contents of the chrony.conf file. For example, to configure chrony on worker nodes, create a 99-worker-chrony.bu file. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.13.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony 1 2 On control plane nodes, substitute master for worker in both of these locations. 3 Specify an octal value mode for the mode field in the machine config file. After creating the file and applying the changes, the mode is converted to a decimal value. You can check the YAML file with the command oc get mc <mc-name> -o yaml . 4 Specify any valid, reachable time source, such as the one provided by your DHCP server. Use Butane to generate a MachineConfig object file, 99-worker-chrony.yaml , containing the configuration to be delivered to the nodes: USD butane 99-worker-chrony.bu -o 99-worker-chrony.yaml Apply the configurations in one of two ways: If the cluster is not running yet, after you generate manifest files, add the MachineConfig object file to the <installation_directory>/openshift directory, and then continue to create the cluster. If the cluster is already running, apply the file: USD oc apply -f ./99-worker-chrony.yaml 4.11. 
Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on bare metal infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on the machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. To install RHCOS on the machines, follow either the steps to use an ISO image or network PXE booting. Note The compute node deployment steps included in this installation document are RHCOS-specific. If you choose instead to deploy RHEL-based compute nodes, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Only RHEL 8 compute machines are supported. You can configure RHCOS during ISO and PXE installations by using the following methods: Kernel arguments: You can use kernel arguments to provide installation-specific information. For example, you can specify the locations of the RHCOS installation files that you uploaded to your HTTP server and the location of the Ignition config file for the type of node you are installing. For a PXE installation, you can use the APPEND parameter to pass the arguments to the kernel of the live installer. For an ISO installation, you can interrupt the live installation boot process to add the kernel arguments. In both installation cases, you can use special coreos.inst.* arguments to direct the live installer, as well as standard installation boot arguments for turning standard kernel services on or off. Ignition configs: OpenShift Container Platform Ignition config files ( *.ign ) are specific to the type of node you are installing. You pass the location of a bootstrap, control plane, or compute node Ignition config file during the RHCOS installation so that it takes effect on first boot. In special cases, you can create a separate, limited Ignition config to pass to the live system. That Ignition config could do a certain set of tasks, such as reporting success to a provisioning system after completing installation. This special Ignition config is consumed by the coreos-installer to be applied on first boot of the installed system. Do not provide the standard control plane and compute node Ignition configs to the live ISO directly. coreos-installer : You can boot the live ISO installer to a shell prompt, which allows you to prepare the permanent system in a variety of ways before first boot. In particular, you can run the coreos-installer command to identify various artifacts to include, work with disk partitions, and set up networking. In some cases, you can configure features on the live system and copy them to the installed system. Whether to use an ISO or PXE install depends on your situation. A PXE install requires an available DHCP service and more preparation, but can make the installation process more automated. An ISO install is a more manual process and can be inconvenient if you are setting up more than a few machines. 4.11.1. Installing RHCOS by using an ISO image You can use an ISO image to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. 
You have configured suitable network, DNS, and load balancing infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Obtain the SHA512 digest for each of your Ignition config files. For example, you can use the following on a system running Linux to get the SHA512 digest for your bootstrap.ign Ignition config file: USD sha512sum <installation_directory>/bootstrap.ign The digests are provided to the coreos-installer in a later step to validate the authenticity of the Ignition config files on the cluster nodes. Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. Although it is possible to obtain the RHCOS images that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS images is from the output of the openshift-install command: USD openshift-install coreos print-stream-json | grep '\.iso[^.]' Example output "location": "<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso", "location": "<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso", "location": "<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso", "location": "<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live.x86_64.iso", Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Use only ISO images for this procedure. RHCOS qcow2 images are not supported for this installation type. ISO file names resemble the following example: rhcos-<version>-live.<architecture>.iso Use the ISO to start the RHCOS installation. Use one of the following installation options: Burn the ISO image to a disk and boot it directly. Use ISO redirection by using a lights-out management (LOM) interface. Boot the RHCOS ISO image without specifying any options or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment.
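Optionally, before you run the coreos-installer command in the next step, you can confirm from the live shell that the node itself can reach the Ignition config URL that you validated earlier from the installation host. This is a minimal check only, and it assumes the same placeholder bootstrap URL that is used in the preceding validation step:
curl -I http://<HTTP_server>/bootstrap.ign
If the request fails, troubleshoot the node networking and the HTTP server before you continue with the installation.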
Note It is possible to interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure, you should use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step. Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, you must reboot the system. During the system reboot, it applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example output Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the other machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install OpenShift Container Platform. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install-config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 4.11.2.
Installing RHCOS by using PXE or iPXE booting You can use PXE or iPXE booting to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS, and load balancing infrastructure. You have configured suitable PXE or iPXE infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. Although it is possible to obtain the RHCOS kernel , initramfs , and rootfs files that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS files is from the output of the openshift-install command: USD openshift-install coreos print-stream-json | grep -Eo '"https.*(kernel-|initramfs.|rootfs.)\w+(\.img)?"' Example output "<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64" "<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live-kernel-ppc64le" "<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x" "<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img" "<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img" "<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live-kernel-x86_64" "<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img" "<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img" Important The RHCOS artifacts might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install.
Only use the appropriate kernel , initramfs , and rootfs artifacts described below for this procedure. RHCOS QCOW2 images are not supported for this installation type. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel : rhcos-<version>-live-kernel-<architecture> initramfs : rhcos-<version>-live-initramfs.<architecture>.img rootfs : rhcos-<version>-live-rootfs.<architecture>.img Upload the rootfs , kernel , and initramfs files to your HTTP server. Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Configure the network boot infrastructure so that the machines boot from their local disks after RHCOS is installed on them. Configure PXE or iPXE installation for the RHCOS images and begin the installation. Modify one of the following example menu entries for your environment and verify that the image and Ignition files are properly accessible: For PXE ( x86_64 ): 1 1 Specify the location of the live kernel file that you uploaded to your HTTP server. The URL must be HTTP, TFTP, or FTP; HTTPS and NFS are not supported. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the initramfs file, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. You can also add more kernel arguments to the APPEND line to configure networking or other boot options. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. For iPXE ( x86_64 + aarch64 ): 1 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your HTTP server. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the kernel line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. 
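For reference, the following are minimal, representative sketches of the menu entries that the preceding callouts describe; they are not verbatim examples from this document. The HTTP server address, RHCOS version, architecture, and target install device ( /dev/sda ) are placeholders or assumptions that you must adapt to your environment. A PXE ( x86_64 ) menu entry might look like this:
DEFAULT pxeboot
TIMEOUT 20
PROMPT 0
LABEL pxeboot
KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture>
APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign
An equivalent iPXE ( x86_64 and aarch64 ) script might look like this, with the initrd=main argument included for UEFI booting as described in the callouts:
kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign
initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img
boot
As the callouts note, add an ip=<interface>:dhcp argument if the machine has multiple NICs, and add console= arguments if you need serial console access.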
Note To network boot the CoreOS kernel on aarch64 architecture, you need to use a version of iPXE built with the IMAGE_GZIP option enabled. See IMAGE_GZIP option in iPXE . For PXE (with UEFI and Grub as second stage) on aarch64 : 1 Specify the locations of the RHCOS files that you uploaded to your HTTP/TFTP server. The kernel parameter value is the location of the kernel file on your TFTP server. The coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file on your HTTP Server. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your TFTP server. Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, the system reboots. During reboot, the system applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example output Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install the cluster. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install-config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 4.11.3. Advanced RHCOS installation configuration A key benefit of manually provisioning the Red Hat Enterprise Linux CoreOS (RHCOS) nodes for OpenShift Container Platform is to be able to do configuration that is not available through default OpenShift Container Platform installation methods. This section describes some of the configurations that you can do using techniques that include: Passing kernel arguments to the live installer Running coreos-installer manually from the live system Customizing a live ISO or PXE boot image The advanced configuration topics for manual Red Hat Enterprise Linux CoreOS (RHCOS) installations detailed in this section relate to disk partitioning, networking, and using Ignition configs in different ways. 4.11.3.1. Using advanced networking options for PXE and ISO installations Networking for OpenShift Container Platform nodes uses DHCP by default to gather all necessary configuration settings.
To set up static IP addresses or configure special settings, such as bonding, you can do one of the following: Pass special kernel parameters when you boot the live installer. Use a machine config to copy networking files to the installed system. Configure networking from a live installer shell prompt, then copy those settings to the installed system so that they take effect when the installed system first boots. To configure a PXE or iPXE installation, use one of the following options: See the "Advanced RHCOS installation reference" tables. Use a machine config to copy networking files to the installed system. To configure an ISO installation, use the following procedure. Procedure Boot the ISO installer. From the live system shell prompt, configure networking for the live system using available RHEL tools, such as nmcli or nmtui . Run the coreos-installer command to install the system, adding the --copy-network option to copy networking configuration. For example: USD sudo coreos-installer install --copy-network \ --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number> Important The --copy-network option only copies networking configuration found under /etc/NetworkManager/system-connections . In particular, it does not copy the system hostname. Reboot into the installed system. Additional resources See Getting started with nmcli and Getting started with nmtui in the RHEL 8 documentation for more information about the nmcli and nmtui tools. 4.11.3.2. Disk partitioning Disk partitions are created on OpenShift Container Platform cluster nodes during the Red Hat Enterprise Linux CoreOS (RHCOS) installation. Each RHCOS node of a particular architecture uses the same partition layout, unless you override the default partitioning configuration. During the RHCOS installation, the size of the root file system is increased to use any remaining available space on the target device. Important The use of a custom partition scheme on your node might result in OpenShift Container Platform not monitoring or alerting on some node partitions. If you override the default partitioning, see Understanding OpenShift File System Monitoring (eviction conditions) for more information about how OpenShift Container Platform monitors your host file systems. OpenShift Container Platform monitors the following two filesystem identifiers: nodefs , which is the filesystem that contains /var/lib/kubelet imagefs , which is the filesystem that contains /var/lib/containers For the default partition scheme, nodefs and imagefs monitor the same root filesystem, / . To override the default partitioning when installing RHCOS on an OpenShift Container Platform cluster node, you must create separate partitions. Important For disk sizes larger than 100GB, and especially disk sizes larger than 1TB, create a separate /var partition. See "Creating a separate /var partition" and this Red Hat Knowledgebase article for more information. Consider a situation where you want to add a separate storage partition for your containers and container images. For example, by mounting /var/lib/containers in a separate partition, the kubelet separately monitors /var/lib/containers as the imagefs directory and the root file system as the nodefs directory. Important If you have resized your disk size to host a larger file system, consider creating a separate /var/lib/containers partition. Consider resizing a disk that has an xfs format to reduce CPU time issues caused by a high number of allocation groups. 4.11.3.2.1. 
Creating a separate /var partition In general, you should use the default disk partitioning that is created during the RHCOS installation. However, there are cases where you might want to create a separate partition for a directory that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var directory or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Important For disk sizes larger than 100GB, and especially larger than 1TB, create a separate /var partition. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. The use of a separate partition for the /var directory or a subdirectory of /var also prevents data growth in the partitioned directory from filling up the root file system. The following procedure sets up a separate /var partition by adding a machine config manifest that is wrapped into the Ignition config file for a node type during the preparation phase of an installation. Procedure On your installation host, change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD openshift-install create manifests --dir <installation_directory> Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.13.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum offset value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no offset value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for compute nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. 
For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Create the Ignition config files: USD openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory: The files in the <installation_directory>/manifest and <installation_directory>/openshift directories are wrapped into the Ignition config files, including the file that contains the 98-var-partition custom MachineConfig object. steps You can apply the custom disk partitioning by referencing the Ignition config files during the RHCOS installations. 4.11.3.2.2. Retaining existing partitions For an ISO installation, you can add options to the coreos-installer command that cause the installer to maintain one or more existing partitions. For a PXE installation, you can add coreos.inst.* options to the APPEND parameter to preserve partitions. Saved partitions might be data partitions from an existing OpenShift Container Platform system. You can identify the disk partitions you want to keep either by partition label or by number. Note If you save existing partitions, and those partitions do not leave enough space for RHCOS, the installation will fail without damaging the saved partitions. Retaining existing partitions during an ISO installation This example preserves any partition in which the partition label begins with data ( data* ): # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign \ --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number> The following example illustrates running the coreos-installer in a way that preserves the sixth (6) partition on the disk: # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign \ --save-partindex 6 /dev/disk/by-id/scsi-<serial_number> This example preserves partitions 5 and higher: # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number> In the examples where partition saving is used, coreos-installer recreates the partition immediately. Retaining existing partitions during a PXE installation This APPEND option preserves any partition in which the partition label begins with 'data' ('data*'): coreos.inst.save_partlabel=data* This APPEND option preserves partitions 5 and higher: coreos.inst.save_partindex=5- This APPEND option preserves partition 6: coreos.inst.save_partindex=6 4.11.3.3. Identifying Ignition configs When doing an RHCOS manual installation, there are two types of Ignition configs that you can provide, with different reasons for providing each one: Permanent install Ignition config : Every manual RHCOS installation needs to pass one of the Ignition config files generated by openshift-installer , such as bootstrap.ign , master.ign and worker.ign , to carry out the installation. Important It is not recommended to modify these Ignition config files directly. You can update the manifest files that are wrapped into the Ignition config files, as outlined in examples in the preceding sections. For PXE installations, you pass the Ignition configs on the APPEND line using the coreos.inst.ignition_url= option. For ISO installations, after the ISO boots to the shell prompt, you identify the Ignition config on the coreos-installer command line with the --ignition-url= option. 
In both cases, only HTTP and HTTPS protocols are supported. Live install Ignition config : This type can be created by using the coreos-installer customize subcommand and its various options. With this method, the Ignition config passes to the live install medium, runs immediately upon booting, and performs setup tasks before or after the RHCOS system installs to disk. This method should only be used for performing tasks that must be done once and not applied again later, such as with advanced partitioning that cannot be done using a machine config. For PXE or ISO boots, you can create the Ignition config and APPEND the ignition.config.url= option to identify the location of the Ignition config. You also need to append ignition.firstboot ignition.platform.id=metal or the ignition.config.url option will be ignored. 4.11.3.4. Default console configuration Red Hat Enterprise Linux CoreOS (RHCOS) nodes installed from an OpenShift Container Platform 4.13 boot image use a default console that is meant to accommodate most virtualized and bare metal setups. Different cloud and virtualization platforms may use different default settings depending on the chosen architecture. Bare metal installations use the kernel default settings, which typically means the graphical console is the primary console and the serial console is disabled. The default consoles may not match your specific hardware configuration, or you might have specific needs that require you to adjust the default console. For example: You want to access the emergency shell on the console for debugging purposes. Your cloud platform does not provide interactive access to the graphical console, but provides a serial console. You want to enable multiple consoles. Console configuration is inherited from the boot image. This means that new nodes in existing clusters are unaffected by changes to the default console. You can configure the console for bare metal installations in the following ways: Using coreos-installer manually on the command line. Using the coreos-installer iso customize or coreos-installer pxe customize subcommands with the --dest-console option to create a custom image that automates the process. Note For advanced customization, perform console configuration using the coreos-installer iso or coreos-installer pxe subcommands, and not kernel arguments. 4.11.3.5. Enabling the serial console for PXE and ISO installations By default, the Red Hat Enterprise Linux CoreOS (RHCOS) serial console is disabled and all output is written to the graphical console. You can enable the serial console for an ISO installation and reconfigure the bootloader so that output is sent to both the serial console and the graphical console. Procedure Boot the ISO installer. Run the coreos-installer command to install the system, adding the --console option once to specify the graphical console, and a second time to specify the serial console: USD coreos-installer install \ --console=tty0 \ 1 --console=ttyS0,<options> \ 2 --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number> 1 The desired secondary console. In this case, the graphical console. Omitting this option will disable the graphical console. 2 The desired primary console. In this case, the serial console. The options field defines the baud rate and other settings. A common value for this field is 115200n8 . If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see the Linux kernel serial console documentation.
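For example, the following invocation (shown here only as an illustration) enables a serial console at 115200 baud, 8 data bits, and no parity as the primary console, with the graphical console as the secondary console; the Ignition config URL and destination disk are the same placeholders used above and must be replaced with values for your environment: USD coreos-installer install --console=tty0 --console=ttyS0,115200n8 --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>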
Reboot into the installed system. Note A similar outcome can be obtained by using the coreos-installer install --append-karg option, and specifying the console with console= . However, this will only set the console for the kernel and not the bootloader. To configure a PXE installation, make sure the coreos.inst.install_dev kernel command line option is omitted, and use the shell prompt to run coreos-installer manually using the above ISO installation procedure. 4.11.3.6. Customizing a live RHCOS ISO or PXE install You can use the live ISO image or PXE environment to install RHCOS by injecting an Ignition config file directly into the image. This creates a customized image that you can use to provision your system. For an ISO image, the mechanism to do this is the coreos-installer iso customize subcommand, which modifies the .iso file with your configuration. Similarly, the mechanism for a PXE environment is the coreos-installer pxe customize subcommand, which creates a new initramfs file that includes your customizations. The customize subcommand is a general purpose tool that can embed other types of customizations as well. The following tasks are examples of some of the more common customizations: Inject custom CA certificates for when corporate security policy requires their use. Configure network settings without the need for kernel arguments. Embed arbitrary preinstall and post-install scripts or binaries. 4.11.3.7. Customizing a live RHCOS ISO image You can customize a live RHCOS ISO image directly with the coreos-installer iso customize subcommand. When you boot the ISO image, the customizations are applied automatically. You can use this feature to configure the ISO image to automatically install RHCOS. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and the Ignition config file, and then run the following command to inject the Ignition config directly into the ISO image: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso \ --dest-ignition bootstrap.ign \ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> 2 1 The Ignition config file that is generated from the openshift-installer installation program. 2 When you specify this option, the ISO image automatically runs an installation. Otherwise, the image remains configured for installation, but does not install automatically unless you specify the coreos.inst.install_dev kernel argument. Optional: To remove the ISO image customizations and return the image to its pristine state, run: USD coreos-installer iso reset rhcos-<version>-live.x86_64.iso You can now re-customize the live ISO image or use it in its pristine state. Applying your customizations affects every subsequent boot of RHCOS. 4.11.3.7.1. Modifying a live install ISO image to enable the serial console On clusters installed with OpenShift Container Platform 4.12 and above, the serial console is disabled by default and all output is written to the graphical console. You can enable the serial console with the following procedure. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. 
Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image to enable the serial console to receive output: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso \ --dest-ignition <path> \ 1 --dest-console tty0 \ 2 --dest-console ttyS0,<options> \ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> 4 1 The location of the Ignition config to install. 2 The desired secondary console. In this case, the graphical console. Omitting this option will disable the graphical console. 3 The desired primary console. In this case, the serial console. The options field defines the baud rate and other settings. A common value for this field is 115200n8 . If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see the Linux kernel serial console documentation. 4 The specified disk to install to. If you omit this option, the ISO image automatically runs the installation program which will fail unless you also specify the coreos.inst.install_dev kernel argument. Note The --dest-console option affects the installed system and not the live ISO system. To modify the console for a live ISO system, use the --live-karg-append option and specify the console with console= . Your customizations are applied and affect every subsequent boot of the ISO image. Optional: To remove the ISO image customizations and return the image to its original state, run the following command: USD coreos-installer iso reset rhcos-<version>-live.x86_64.iso You can now recustomize the live ISO image or use it in its original state. 4.11.3.7.2. Modifying a live install ISO image to use a custom certificate authority You can provide certificate authority (CA) certificates to Ignition with the --ignition-ca flag of the customize subcommand. You can use the CA certificates during both the installation boot and when provisioning the installed system. Note Custom CA certificates affect how Ignition fetches remote resources but they do not affect the certificates installed onto the system. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image for use with a custom CA: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso --ignition-ca cert.pem Important The coreos.inst.ignition_url kernel parameter does not work with the --ignition-ca flag. You must use the --dest-ignition flag to create a customized image for each cluster. Applying your custom CA certificate affects every subsequent boot of RHCOS. 4.11.3.7.3. Modifying a live install ISO image with customized network settings You can embed a NetworkManager keyfile into the live ISO image and pass it through to the installed system with the --network-keyfile flag of the customize subcommand. Warning When creating a connection profile, you must use a .nmconnection filename extension in the filename of the connection profile. If you do not use a .nmconnection filename extension, the cluster will apply the connection profile to the live environment, but it will not apply the configuration when the cluster first boots up the nodes, resulting in a setup that does not work. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Create a connection profile for a bonded interface. 
For example, create the bond0.nmconnection file in your local directory with the following content: [connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em1.nmconnection file in your local directory with the following content: [connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em2.nmconnection file in your local directory with the following content: [connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image with your configured networking: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso \ --network-keyfile bond0.nmconnection \ --network-keyfile bond0-proxy-em1.nmconnection \ --network-keyfile bond0-proxy-em2.nmconnection Network settings are applied to the live system and are carried over to the destination system. 4.11.3.8. Customizing a live RHCOS PXE environment You can customize a live RHCOS PXE environment directly with the coreos-installer pxe customize subcommand. When you boot the PXE environment, the customizations are applied automatically. You can use this feature to configure the PXE environment to automatically install RHCOS. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and the Ignition config file, and then run the following command to create a new initramfs file that contains the customizations from your Ignition config: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --dest-ignition bootstrap.ign \ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> \ 2 -o rhcos-<version>-custom-initramfs.x86_64.img 3 1 The Ignition config file that is generated from openshift-installer . 2 When you specify this option, the PXE environment automatically runs an install. Otherwise, the image remains configured for installing, but does not do so automatically unless you specify the coreos.inst.install_dev kernel argument. 3 Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Applying your customizations affects every subsequent boot of RHCOS. 4.11.3.8.1. Modifying a live install PXE environment to enable the serial console On clusters installed with OpenShift Container Platform 4.12 and above, the serial console is disabled by default and all output is written to the graphical console. You can enable the serial console with the following procedure. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. 
Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and the Ignition config file, and then run the following command to create a new customized initramfs file that enables the serial console to receive output: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --dest-ignition <path> \ 1 --dest-console tty0 \ 2 --dest-console ttyS0,<options> \ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> \ 4 -o rhcos-<version>-custom-initramfs.x86_64.img 5 1 The location of the Ignition config to install. 2 The desired secondary console. In this case, the graphical console. Omitting this option will disable the graphical console. 3 The desired primary console. In this case, the serial console. The options field defines the baud rate and other settings. A common value for this field is 115200n8 . If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see the Linux kernel serial console documentation. 4 The specified disk to install to. If you omit this option, the PXE environment automatically runs the installer which will fail unless you also specify the coreos.inst.install_dev kernel argument. 5 Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Your customizations are applied and affect every subsequent boot of the PXE environment. 4.11.3.8.2. Modifying a live install PXE environment to use a custom certificate authority You can provide certificate authority (CA) certificates to Ignition with the --ignition-ca flag of the customize subcommand. You can use the CA certificates during both the installation boot and when provisioning the installed system. Note Custom CA certificates affect how Ignition fetches remote resources but they do not affect the certificates installed onto the system. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and run the following command to create a new customized initramfs file for use with a custom CA: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --ignition-ca cert.pem \ -o rhcos-<version>-custom-initramfs.x86_64.img Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Important The coreos.inst.ignition_url kernel parameter does not work with the --ignition-ca flag. You must use the --dest-ignition flag to create a customized image for each cluster. Applying your custom CA certificate affects every subsequent boot of RHCOS. 4.11.3.8.3. Modifying a live install PXE environment with customized network settings You can embed a NetworkManager keyfile into the live PXE environment and pass it through to the installed system with the --network-keyfile flag of the customize subcommand. Warning When creating a connection profile, you must use a .nmconnection filename extension in the filename of the connection profile. If you do not use a .nmconnection filename extension, the cluster will apply the connection profile to the live environment, but it will not apply the configuration when the cluster first boots up the nodes, resulting in a setup that does not work. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. 
Create a connection profile for a bonded interface. For example, create the bond0.nmconnection file in your local directory with the following content: [connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em1.nmconnection file in your local directory with the following content: [connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em2.nmconnection file in your local directory with the following content: [connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and run the following command to create a new customized initramfs file that contains your configured networking: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --network-keyfile bond0.nmconnection \ --network-keyfile bond0-proxy-em1.nmconnection \ --network-keyfile bond0-proxy-em2.nmconnection \ -o rhcos-<version>-custom-initramfs.x86_64.img Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Network settings are applied to the live system and are carried over to the destination system. 4.11.3.9. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 4.11.3.9.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . 
No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. 
Refer to the following examples: The syntax for configuring a bonded interface is: bond=<name>[:<network_interfaces>][:options] <name> is the bonding device name ( bond0 ), <network_interfaces> represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Bonding multiple SR-IOV network interfaces to a dual port NIC interface Important Support for Day 1 operations associated with enabling NIC partitioning for SR-IOV devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Optional: You can bond multiple SR-IOV network interfaces to a dual port NIC interface by using the bond= option. On each node, you must perform the following tasks: Create the SR-IOV virtual functions (VFs) following the guidance in Managing SR-IOV devices . Follow the procedure in the "Attaching SR-IOV networking devices to virtual machines" section. Create the bond, attach the desired VFs to the bond and set the bond link state up following the guidance in Configuring network bonding . Follow any of the described procedures to create the bond. The following examples illustrate the syntax you must use: The syntax for configuring a bonded interface is bond=<name>[:<network_interfaces>][:options] . <name> is the bonding device name ( bond0 ), <network_interfaces> represents the virtual functions (VFs) by their known name in the kernel and shown in the output of the ip link command( eno1f0 , eno2f0 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Using network teaming Optional: You can use a network teaming as an alternative to bonding by using the team= parameter: The syntax for configuring a team interface is: team=name[:network_interfaces] name is the team device name ( team0 ) and network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1, em2 ). 
Note Teaming is planned to be deprecated when RHCOS switches to an upcoming version of RHEL. For more information, see this Red Hat Knowledgebase Article . Use the following example to configure a network team: team=team0:em1,em2 ip=team0:dhcp 4.11.3.9.2. coreos-installer options for ISO and PXE installations You can install RHCOS by running coreos-installer install <options> <device> at the command prompt, after booting into the RHCOS live environment from an ISO image. The following table shows the subcommands, options, and arguments you can pass to the coreos-installer command. Table 4.12. coreos-installer subcommands, command-line options, and arguments coreos-installer install subcommand Subcommand Description USD coreos-installer install <options> <device> Install RHCOS to the specified destination device. coreos-installer install subcommand options Option Description -u , --image-url <url> Specify the image URL manually. -f , --image-file <path> Specify a local image file manually. Used for debugging. -i, --ignition-file <path> Embed an Ignition config from a file. -I , --ignition-url <URL> Embed an Ignition config from a URL. --ignition-hash <digest> Digest type-value of the Ignition config. -p , --platform <name> Override the Ignition platform ID for the installed system. --console <spec> Set the kernel and bootloader console for the installed system. For more information about the format of <spec> , see the Linux kernel serial console documentation. --append-karg <arg>... Append a default kernel argument to the installed system. --delete-karg <arg>... Delete a default kernel argument from the installed system. -n , --copy-network Copy the network configuration from the install environment. Important The --copy-network option only copies networking configuration found under /etc/NetworkManager/system-connections . In particular, it does not copy the system hostname. --network-dir <path> For use with -n . Default is /etc/NetworkManager/system-connections/ . --save-partlabel <lx>... Save partitions with this label glob. --save-partindex <id>... Save partitions with this number or range. --insecure Skip RHCOS image signature verification. --insecure-ignition Allow Ignition URL without HTTPS or hash. --architecture <name> Target CPU architecture. Valid values are x86_64 and aarch64 . --preserve-on-error Do not clear partition table on error. -h , --help Print help information. coreos-installer install subcommand argument Argument Description <device> The destination device. coreos-installer ISO subcommands Subcommand Description USD coreos-installer iso customize <options> <ISO_image> Customize a RHCOS live ISO image. coreos-installer iso reset <options> <ISO_image> Restore a RHCOS live ISO image to default settings. coreos-installer iso ignition remove <options> <ISO_image> Remove the embedded Ignition config from an ISO image. coreos-installer ISO customize subcommand options Option Description --dest-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the destination system. --dest-console <spec> Specify the kernel and bootloader console for the destination system. --dest-device <path> Install and overwrite the specified destination device. --dest-karg-append <arg> Add a kernel argument to each boot of the destination system. --dest-karg-delete <arg> Delete a kernel argument from each boot of the destination system. --network-keyfile <path> Configure networking by using the specified NetworkManager keyfile for live and destination systems.
--ignition-ca <path> Specify an additional TLS certificate authority to be trusted by Ignition. --pre-install <path> Run the specified script before installation. --post-install <path> Run the specified script after installation. --installer-config <path> Apply the specified installer configuration file. --live-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the live environment. --live-karg-append <arg> Add a kernel argument to each boot of the live environment. --live-karg-delete <arg> Delete a kernel argument from each boot of the live environment. --live-karg-replace <k=o=n> Replace a kernel argument in each boot of the live environment, in the form key=old=new . -f , --force Overwrite an existing Ignition config. -o , --output <path> Write the ISO to a new output file. -h , --help Print help information. coreos-installer PXE subcommands Subcommand Description Note that not all of these options are accepted by all subcommands. coreos-installer pxe customize <options> <path> Customize a RHCOS live PXE boot config. coreos-installer pxe ignition wrap <options> Wrap an Ignition config in an image. coreos-installer pxe ignition unwrap <options> <image_name> Show the wrapped Ignition config in an image. coreos-installer PXE customize subcommand options Option Description Note that not all of these options are accepted by all subcommands. --dest-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the destination system. --dest-console <spec> Specify the kernel and bootloader console for the destination system. --dest-device <path> Install and overwrite the specified destination device. --network-keyfile <path> Configure networking by using the specified NetworkManager keyfile for live and destination systems. --ignition-ca <path> Specify an additional TLS certificate authority to be trusted by Ignition. --pre-install <path> Run the specified script before installation. --post-install <path> Run the specified script after installation. --installer-config <path> Apply the specified installer configuration file. --live-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the live environment. -o, --output <path> Write the initramfs to a new output file. Note This option is required for PXE environments. -h , --help Print help information. 4.11.3.9.3. coreos.inst boot options for ISO or PXE installations You can automatically invoke coreos-installer options at boot time by passing coreos.inst boot arguments to the RHCOS live installer. These are provided in addition to the standard boot arguments. For ISO installations, the coreos.inst options can be added by interrupting the automatic boot at the bootloader menu. You can interrupt the automatic boot by pressing TAB while the RHEL CoreOS (Live) menu option is highlighted. For PXE or iPXE installations, the coreos.inst options must be added to the APPEND line before the RHCOS live installer is booted. The following table shows the RHCOS live installer coreos.inst boot options for ISO and PXE installations. Table 4.13. coreos.inst boot options Argument Description coreos.inst.install_dev Required. The block device on the system to install to. It is recommended to use the full path, such as /dev/sda , although sda is allowed. coreos.inst.ignition_url Optional: The URL of the Ignition config to embed into the installed system. If no URL is specified, no Ignition config is embedded.
Only HTTP and HTTPS protocols are supported. coreos.inst.save_partlabel Optional: Comma-separated labels of partitions to preserve during the install. Glob-style wildcards are permitted. The specified partitions do not need to exist. coreos.inst.save_partindex Optional: Comma-separated indexes of partitions to preserve during the install. Ranges m-n are permitted, and either m or n can be omitted. The specified partitions do not need to exist. coreos.inst.insecure Optional: Permits the OS image that is specified by coreos.inst.image_url to be unsigned. coreos.inst.image_url Optional: Download and install the specified RHCOS image. This argument should not be used in production environments and is intended for debugging purposes only. While this argument can be used to install a version of RHCOS that does not match the live media, it is recommended that you instead use the media that matches the version you want to install. If you are using coreos.inst.image_url , you must also use coreos.inst.insecure . This is because the bare-metal media are not GPG-signed for OpenShift Container Platform. Only HTTP and HTTPS protocols are supported. coreos.inst.skip_reboot Optional: The system will not reboot after installing. After the install finishes, you will receive a prompt that allows you to inspect what is happening during installation. This argument should not be used in production environments and is intended for debugging purposes only. coreos.inst.platform_id Optional: The Ignition platform ID of the platform the RHCOS image is being installed on. Default is metal . This option determines whether or not to request an Ignition config from the cloud provider, such as VMware. For example: coreos.inst.platform_id=vmware . ignition.config.url Optional: The URL of the Ignition config for the live boot. For example, this can be used to customize how coreos-installer is invoked, or to run code before or after the installation. This is different from coreos.inst.ignition_url , which is the Ignition config for the installed system. 4.11.4. Enabling multipathing with kernel arguments on RHCOS RHCOS supports multipathing on the primary disk, allowing stronger resilience to hardware failure to achieve higher host availability. You can enable multipathing at installation time for nodes that were provisioned in OpenShift Container Platform 4.8 or later. While postinstallation support is available by activating multipathing via the machine config, enabling multipathing during installation is recommended. In setups where any I/O to non-optimized paths results in I/O system errors, you must enable multipathing at installation time. Important On IBM Z and IBM(R) LinuxONE, you can enable multipathing only if you configured your cluster for it during installation. For more information, see "Installing RHCOS and starting the OpenShift Container Platform bootstrap process" in Installing a cluster with z/VM on IBM Z and IBM(R) LinuxONE . The following procedure enables multipath at installation time and appends kernel arguments to the coreos-installer install command so that the installed system itself will use multipath beginning from the first boot. Note OpenShift Container Platform does not support enabling multipathing as a day-2 activity on nodes that have been upgraded from 4.6 or earlier. Prerequisites You have created the Ignition config files for your cluster. You have reviewed Installing RHCOS and starting the OpenShift Container Platform bootstrap process . 
Procedure To enable multipath and start the multipathd daemon, run the following command on the installation host: USD mpathconf --enable && systemctl start multipathd.service Optional: If booting the PXE or ISO, you can instead enable multipath by adding rd.multipath=default from the kernel command line. Append the kernel arguments by invoking the coreos-installer program: If there is only one multipath device connected to the machine, it should be available at path /dev/mapper/mpatha . For example: USD coreos-installer install /dev/mapper/mpatha \ 1 --ignition-url=http://host/worker.ign \ --append-karg rd.multipath=default \ --append-karg root=/dev/disk/by-label/dm-mpath-root \ --append-karg rw 1 Indicates the path of the single multipathed device. If there are multiple multipath devices connected to the machine, or to be more explicit, instead of using /dev/mapper/mpatha , it is recommended to use the World Wide Name (WWN) symlink available in /dev/disk/by-id . For example: USD coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \ 1 --ignition-url=http://host/worker.ign \ --append-karg rd.multipath=default \ --append-karg root=/dev/disk/by-label/dm-mpath-root \ --append-karg rw 1 Indicates the WWN ID of the target multipathed device. For example, 0xx194e957fcedb4841 . This symlink can also be used as the coreos.inst.install_dev kernel argument when using special coreos.inst.* arguments to direct the live installer. For more information, see "Installing RHCOS and starting the OpenShift Container Platform bootstrap process". Reboot into the installed system. Check that the kernel arguments worked by going to one of the worker nodes and listing the kernel command line arguments (in /proc/cmdline on the host): USD oc debug node/ip-10-0-141-105.ec2.internal Example output Starting pod/ip-10-0-141-105ec2internal-debug ... To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline ... rd.multipath=default root=/dev/disk/by-label/dm-mpath-root ... sh-4.2# exit You should see the added kernel arguments. 4.11.4.1. Enabling multipathing on secondary disks RHCOS also supports multipathing on a secondary disk. Instead of kernel arguments, you use Ignition to enable multipathing for the secondary disk at installation time. Prerequisites You have read the section Disk partitioning . You have read Enabling multipathing with kernel arguments on RHCOS . You have installed the Butane utility. 
Procedure Create a Butane config with information similar to the following: Example multipath-config.bu variant: openshift version: 4.13.0 systemd: units: - name: mpath-configure.service enabled: true contents: | [Unit] Description=Configure Multipath on Secondary Disk ConditionFirstBoot=true ConditionPathExists=!/etc/multipath.conf Before=multipathd.service 1 DefaultDependencies=no [Service] Type=oneshot ExecStart=/usr/sbin/mpathconf --enable 2 [Install] WantedBy=multi-user.target - name: mpath-var-lib-containers.service enabled: true contents: | [Unit] Description=Set Up Multipath On /var/lib/containers ConditionFirstBoot=true 3 Requires=dev-mapper-mpatha.device After=dev-mapper-mpatha.device After=ostree-remount.service Before=kubelet.service DefaultDependencies=no [Service] 4 Type=oneshot ExecStart=/usr/sbin/mkfs.xfs -L containers -m reflink=1 /dev/mapper/mpatha ExecStart=/usr/bin/mkdir -p /var/lib/containers [Install] WantedBy=multi-user.target - name: var-lib-containers.mount enabled: true contents: | [Unit] Description=Mount /var/lib/containers After=mpath-var-lib-containers.service Before=kubelet.service 5 [Mount] 6 What=/dev/disk/by-label/dm-mpath-containers Where=/var/lib/containers Type=xfs [Install] WantedBy=multi-user.target 1 The configuration must be set before launching the multipath daemon. 2 Starts the mpathconf utility. 3 This field must be set to the value true. 4 Creates the filesystem and directory /var/lib/containers . 5 The device must be mounted before starting any nodes. 6 Mounts the device to the /var/lib/containers mount point. This location cannot be a symlink. Create the Ignition configuration by running the following command: USD butane --pretty --strict multipath-config.bu > multipath-config.ign Continue with the rest of the first boot RHCOS installation process. Important Do not add the rd.multipath or root kernel arguments on the command-line during installation unless the primary disk is also multipathed. 4.12. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.26.0 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines.
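Optional: If the wait-for bootstrap-complete command appears to stall, you can inspect progress directly on the bootstrap machine while you wait. As an illustrative sketch, the following command streams the bootstrap service logs over SSH; replace <bootstrap_fqdn> with the fully qualified domain name or IP address of your bootstrap machine: USD ssh core@<bootstrap_fqdn> journalctl -b -f -u release-image.service -u bootkube.service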
After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. Additional resources See Monitoring installation progress for more information about monitoring the installation logs and retrieving diagnostic data if installation issues arise. 4.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 4.14. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. 
Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 4.15. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m Configure the Operators that are not available. Additional resources See Gathering logs from a failed installation for details about gathering data in the event of a failed OpenShift Container Platform installation. See Troubleshooting Operator issues for steps to check Operator pod health across the cluster and gather Operator logs for diagnosis. 4.15.1. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 4.15.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 4.15.2.1. 
Changing the image registry's management state To start the image registry, you must change the Image Registry Operator configuration's managementState from Removed to Managed . Procedure Change the managementState of the Image Registry Operator configuration from Removed to Managed . For example: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}' 4.15.2.2. Configuring registry storage for bare metal and other manual installations As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster that uses manually-provisioned Red Hat Enterprise Linux CoreOS (RHCOS) nodes, such as bare metal. You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.13 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: USD oc edit configs.imageregistry.operator.openshift.io Then, change the line managementState: Removed to managementState: Managed 4.15.2.3. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 4.15.2.4. Configuring block registry storage for bare metal To allow the image registry to use block storage types during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes, or block persistent volumes, are supported but not recommended for use with the image registry on production clusters.
An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. If you choose to use a block storage volume with the image registry, you must use a filesystem persistent volume claim (PVC). Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and run with only one ( 1 ) replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. 4.16. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration.
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation machine configuration tasks documentation for more information. Register your cluster on the Cluster registration page. 4.17. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 4.18. steps Validating an installation . Customize your cluster . Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager (OLM) on restricted networks . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster
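As a convenience, the cluster Operator check in the procedure above can be scripted. The following one-liner is only a sketch, not part of the documented procedure; it uses a jsonpath template to print the Available condition for every cluster Operator so that any Operator that does not yet report True stands out:

oc get clusteroperators -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.conditions[?(@.type=="Available")].status}{"\n"}{end}'

Combined with watch -n5, as shown earlier, this can be used to poll until every Operator reports True.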
[ "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 
604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64", "networking: serviceNetwork: - 172.30.0.0/16 - fd02::/112", "networking: machineNetwork: - cidr: 10.0.0.0/16", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - name: worker platform: {} replicas: 0", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "variant: openshift version: 4.13.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony", "butane 99-worker-chrony.bu -o 99-worker-chrony.yaml", "oc apply -f ./99-worker-chrony.yaml", "sha512sum <installation_directory>/bootstrap.ign", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep '\\.iso[^.]'", "\"location\": \"<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'", "\"<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.13-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" 
\"<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3", "kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot", "menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "sudo coreos-installer install --copy-network --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>", "openshift-install create manifests --dir <installation_directory>", "variant: openshift version: 4.13.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number>", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 6 /dev/disk/by-id/scsi-<serial_number>", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number>", "coreos.inst.save_partlabel=data*", "coreos.inst.save_partindex=5-", "coreos.inst.save_partindex=6", "coreos-installer install --console=tty0 \\ 1 --console=ttyS0,<options> \\ 2 --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>", "coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> 2", "coreos-installer iso reset rhcos-<version>-live.x86_64.iso", "coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> 4", "coreos-installer iso reset rhcos-<version>-live.x86_64.iso", "coreos-installer iso customize rhcos-<version>-live.x86_64.iso --ignition-ca cert.pem", "[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto", "[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond", "[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond", "coreos-installer iso customize rhcos-<version>-live.x86_64.iso --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile bond0-proxy-em2.nmconnection", "coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> \\ 2 -o rhcos-<version>-custom-initramfs.x86_64.img 3", "coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> \\ 4 -o rhcos-<version>-custom-initramfs.x86_64.img 5", "coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --ignition-ca cert.pem -o rhcos-<version>-custom-initramfs.x86_64.img", "[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto", "[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond", "[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond", "coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile bond0-proxy-em2.nmconnection -o rhcos-<version>-custom-initramfs.x86_64.img", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", 
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp", "bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "team=team0:em1,em2 ip=team0:dhcp", "mpathconf --enable && systemctl start multipathd.service", "coreos-installer install /dev/mapper/mpatha \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw", "coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw", "oc debug node/ip-10-0-141-105.ec2.internal", "Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline rd.multipath=default root=/dev/disk/by-label/dm-mpath-root sh-4.2# exit", "variant: openshift version: 4.13.0 systemd: units: - name: mpath-configure.service enabled: true contents: | [Unit] Description=Configure Multipath on Secondary Disk ConditionFirstBoot=true ConditionPathExists=!/etc/multipath.conf Before=multipathd.service 1 DefaultDependencies=no [Service] Type=oneshot ExecStart=/usr/sbin/mpathconf --enable 2 [Install] WantedBy=multi-user.target - name: mpath-var-lib-container.service enabled: true contents: | [Unit] Description=Set Up Multipath On /var/lib/containers ConditionFirstBoot=true 3 Requires=dev-mapper-mpatha.device After=dev-mapper-mpatha.device After=ostree-remount.service Before=kubelet.service DefaultDependencies=no [Service] 4 Type=oneshot ExecStart=/usr/sbin/mkfs.xfs -L containers -m reflink=1 /dev/mapper/mpatha ExecStart=/usr/bin/mkdir -p /var/lib/containers [Install] WantedBy=multi-user.target - name: var-lib-containers.mount enabled: true contents: | [Unit] Description=Mount /var/lib/containers After=mpath-var-lib-containers.service Before=kubelet.service 5 [Mount] 6 What=/dev/disk/by-label/dm-mpath-containers Where=/var/lib/containers Type=xfs [Install] WantedBy=multi-user.target", "butane --pretty --strict multipath-config.bu > multipath-config.ign", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.26.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get 
csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.13 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n 
openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_bare_metal/installing-restricted-networks-bare-metal
5.2.5. /proc/crypto
5.2.5. /proc/crypto This file lists all installed cryptographic ciphers used by the Linux kernel, including additional details for each. A sample /proc/crypto file looks like the following:
[ "name : sha1 module : kernel type : digest blocksize : 64 digestsize : 20 name : md5 module : md5 type : digest blocksize : 64 digestsize : 16" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-proc-crypto
9.8. Querying a Pacemaker Cluster with SNMP (Red Hat Enterprise Linux 7.5 and later)
9.8. Querying a Pacemaker Cluster with SNMP (Red Hat Enterprise Linux 7.5 and later) As of Red Hat Enterprise Linux 7.5, you can use the pcs_snmp_agent daemon to query a Pacemaker cluster for data by means of SNMP. The pcs_snmp_agent daemon is an SNMP agent that connects to the master agent ( snmpd ) by means of agentx protocol. The pcs_snmp_agent agent does not work as a standalone agent as it only provides data to the master agent. The following procedure sets up a basic configuration for a system to use SNMP with a Pacemaker cluster. You run this procedure on each node of the cluster from which you will be using SNMP to fetch data for the cluster. Install the pcs-snmp package on each node of the cluster. This will also install the net-snmp package which provides the snmp daemon. Add the following line to the /etc/snmp/snmpd.conf configuration file to set up the snmpd daemon as master agentx . Add the following line to the /etc/snmp/snmpd.conf configuration file to enable pcs_snmp_agent in the same SNMP configuration. Start the pcs_snmp_agent service. To check the configuration, display the status of the cluster with the pcs status and then try to fetch the data from SNMP to check whether it corresponds to the output. Note that when you use SNMP to fetch data, only primitive resources are provided. The following example shows the output of a pcs status command on a running cluster with one failed action.
[ "yum install pcs-snmp", "master agentx", "view systemview included .1.3.6.1.4.1.32723.100", "systemctl start pcs_snmp_agent.service systemctl enable pcs_snmp_agent.service", "pcs status Cluster name: rhel75-cluster Stack: corosync Current DC: rhel75-node2 (version 1.1.18-5.el7-1a4ef7d180) - partition with quorum Last updated: Wed Nov 15 16:07:44 2017 Last change: Wed Nov 15 16:06:40 2017 by hacluster via cibadmin on rhel75-node1 2 nodes configured 14 resources configured (1 DISABLED) Online: [ rhel75-node1 rhel75-node2 ] Full list of resources: fencing (stonith:fence_xvm): Started rhel75-node1 dummy5 (ocf::pacemaker:Dummy): Stopped (disabled) dummy6 (ocf::pacemaker:Dummy): Stopped dummy7 (ocf::pacemaker:Dummy): Started rhel75-node2 dummy8 (ocf::pacemaker:Dummy): Started rhel75-node1 dummy9 (ocf::pacemaker:Dummy): Started rhel75-node2 Resource Group: group1 dummy1 (ocf::pacemaker:Dummy): Started rhel75-node1 dummy10 (ocf::pacemaker:Dummy): Started rhel75-node1 Clone Set: group2-clone [group2] Started: [ rhel75-node1 rhel75-node2 ] Clone Set: dummy4-clone [dummy4] Started: [ rhel75-node1 rhel75-node2 ] Failed Actions: * dummy6_start_0 on rhel75-node1 'unknown error' (1): call=87, status=complete, exitreason='', last-rc-change='Wed Nov 15 16:05:55 2017', queued=0ms, exec=20ms", "snmpwalk -v 2c -c public localhost PACEMAKER-PCS-V1-MIB::pcmkPcsV1Cluster PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterName.0 = STRING: \"rhel75-cluster\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterQuorate.0 = INTEGER: 1 PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterNodesNum.0 = INTEGER: 2 PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterNodesNames.0 = STRING: \"rhel75-node1\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterNodesNames.1 = STRING: \"rhel75-node2\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterCorosyncNodesOnlineNum.0 = INTEGER: 2 PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterCorosyncNodesOnlineNames.0 = STRING: \"rhel75-node1\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterCorosyncNodesOnlineNames.1 = STRING: \"rhel75-node2\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterCorosyncNodesOfflineNum.0 = INTEGER: 0 PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterPcmkNodesOnlineNum.0 = INTEGER: 2 PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterPcmkNodesOnlineNames.0 = STRING: \"rhel75-node1\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterPcmkNodesOnlineNames.1 = STRING: \"rhel75-node2\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterPcmkNodesStandbyNum.0 = INTEGER: 0 PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterPcmkNodesOfflineNum.0 = INTEGER: 0 PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterAllResourcesNum.0 = INTEGER: 11 PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterAllResourcesIds.0 = STRING: \"fencing\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterAllResourcesIds.1 = STRING: \"dummy5\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterAllResourcesIds.2 = STRING: \"dummy6\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterAllResourcesIds.3 = STRING: \"dummy7\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterAllResourcesIds.4 = STRING: \"dummy8\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterAllResourcesIds.5 = STRING: \"dummy9\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterAllResourcesIds.6 = STRING: \"dummy1\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterAllResourcesIds.7 = STRING: \"dummy10\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterAllResourcesIds.8 = STRING: \"dummy2\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterAllResourcesIds.9 = STRING: \"dummy3\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterAllResourcesIds.10 = STRING: \"dummy4\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterRunningResourcesNum.0 = INTEGER: 9 PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterRunningResourcesIds.0 = STRING: \"fencing\" 
PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterRunningResourcesIds.1 = STRING: \"dummy7\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterRunningResourcesIds.2 = STRING: \"dummy8\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterRunningResourcesIds.3 = STRING: \"dummy9\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterRunningResourcesIds.4 = STRING: \"dummy1\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterRunningResourcesIds.5 = STRING: \"dummy10\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterRunningResourcesIds.6 = STRING: \"dummy2\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterRunningResourcesIds.7 = STRING: \"dummy3\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterRunningResourcesIds.8 = STRING: \"dummy4\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterStoppedResroucesNum.0 = INTEGER: 1 PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterStoppedResroucesIds.0 = STRING: \"dummy5\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterFailedResourcesNum.0 = INTEGER: 1 PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterFailedResourcesIds.0 = STRING: \"dummy6\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterFailedResourcesIds.0 = No more variables left in this MIB View (It is past the end of the MIB tree)" ]
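Individual values can also be fetched instead of walking the whole subtree. For example, using the same community string and host as in the example above, the cluster name alone could be queried with snmpget; this is only a sketch and assumes the agent is configured as described in this section:

snmpget -v 2c -c public localhost PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterName.0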
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-snmpandpacemaker-HAAR
5.3. Viewing the Current Status and Settings of firewalld
5.3. Viewing the Current Status and Settings of firewalld 5.3.1. Viewing the Current Status of firewalld The firewall service, firewalld , is installed on the system by default. Use the firewalld CLI interface to check that the service is running. To see the status of the service: For more information about the service status, use the systemctl status sub-command: Furthermore, it is important to know how firewalld is set up and which rules are in force before you try to edit the settings. To display the firewall settings, see Section 5.3.2, "Viewing Current firewalld Settings" 5.3.2. Viewing Current firewalld Settings 5.3.2.1. Viewing Allowed Services using GUI To view the list of services using the graphical firewall-config tool, press the Super key to enter the Activities Overview, type firewall , and press Enter . The firewall-config tool appears. You can now view the list of services under the Services tab. Alternatively, to start the graphical firewall configuration tool using the command-line, enter the following command: The Firewall Configuration window opens. Note that this command can be run as a normal user, but you are prompted for an administrator password occasionally. Figure 5.2. The Services tab in firewall-config 5.3.2.2. Viewing firewalld Settings using CLI With the CLI client, it is possible to get different views of the current firewall settings. The --list-all option shows a complete overview of the firewalld settings. firewalld uses zones to manage the traffic. If a zone is not specified by the --zone option, the command is effective in the default zone assigned to the active network interface and connection. To list all the relevant information for the default zone: Note To specify the zone for which to display the settings, add the --zone= zone-name argument to the firewall-cmd --list-all command, for example: To see the settings for a particular category of information, such as services or ports, use a specific option. See the firewalld manual pages or get a list of the options using the command help: For example, to see which services are allowed in the current zone: The output from listing the settings for a certain subpart using the CLI tool can sometimes be difficult to interpret. For example, you allow the SSH service and firewalld opens the necessary port (22) for the service. Later, if you list the allowed services, the list shows the SSH service, but if you list open ports, it does not show any. Therefore, it is recommended to use the --list-all option to make sure you receive complete information.
[ "~]# firewall-cmd --state", "~]# systemctl status firewalld firewalld.service - firewalld - dynamic firewall daemon Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor pr Active: active (running) since Mon 2017-12-18 16:05:15 CET; 50min ago Docs: man:firewalld(1) Main PID: 705 (firewalld) Tasks: 2 (limit: 4915) CGroup: /system.slice/firewalld.service └─705 /usr/bin/python3 -Es /usr/sbin/firewalld --nofork --nopid", "~]USD firewall-config", "~]# firewall-cmd --list-all public target: default icmp-block-inversion: no interfaces: sources: services: ssh dhcpv6-client ports: protocols: masquerade: no forward-ports: source-ports: icmp-blocks: rich rules:", "~]# firewall-cmd --list-all --zone=home home target: default icmp-block-inversion: no interfaces: sources: services: ssh mdns samba-client dhcpv6-client ... [output truncated]", "~]# firewall-cmd --help Usage: firewall-cmd [OPTIONS...] General Options -h, --help Prints a short help text and exists -V, --version Print the version string of firewalld -q, --quiet Do not print status messages Status Options --state Return and print firewalld state --reload Reload firewall and keep state information ... [output truncated]", "~]# firewall-cmd --list-services ssh dhcpv6-client" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-Viewing_Current_Status_and_Settings_of_firewalld
Chapter 2. Learn more about OpenShift Container Platform
Chapter 2. Learn more about OpenShift Container Platform Use the following sections to find content to help you learn about and use OpenShift Container Platform. 2.1. Architect Learn about OpenShift Container Platform Plan an OpenShift Container Platform deployment Additional resources Enterprise Kubernetes with OpenShift Tested platforms OpenShift blog Architecture Security and compliance What's new in OpenShift Container Platform Networking OpenShift Container Platform life cycle Backup and restore 2.2. Cluster Administrator Learn about OpenShift Container Platform Deploy OpenShift Container Platform Manage OpenShift Container Platform Additional resources Enterprise Kubernetes with OpenShift Installing OpenShift Container Platform Using Insights to identify issues with your cluster Getting Support Architecture Post installation configuration Logging OpenShift Knowledgebase articles OpenShift Interactive Learning Portal Networking Monitoring overview OpenShift Container Platform Life Cycle Storage Backup and restore Updating a cluster 2.3. Application Site Reliability Engineer (App SRE) Learn about OpenShift Container Platform Deploy and manage applications Additional resources OpenShift Interactive Learning Portal Projects Getting Support Architecture Operators OpenShift Knowledgebase articles Logging OpenShift Container Platform Life Cycle Blogs about logging Monitoring 2.4. Developer Learn about application development in OpenShift Container Platform Deploy applications Getting started with OpenShift for developers (interactive tutorial) Creating applications Red Hat Developers site Builds Red Hat OpenShift Dev Spaces (formerly Red Hat CodeReady Workspaces) Operators Images Developer-focused CLI
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/about/learn_more_about_openshift
Chapter 35. Log Files
Chapter 35. Log Files Log files are files that contain messages about the system, including the kernel, services, and applications running on it. There are different log files for different types of information. For example, there is a default system log file, a log file just for security messages, and a log file for cron tasks. Log files can be very useful when trying to troubleshoot a problem with the system, such as when trying to load a kernel driver or when looking for unauthorized login attempts on the system. This chapter discusses where to find log files, how to view log files, and what to look for in log files. Some log files are controlled by a daemon called syslogd . The rules that determine which log messages syslogd writes to which log files can be found in the /etc/syslog.conf configuration file. 35.1. Locating Log Files Most log files are located in the /var/log/ directory. Some applications such as httpd and samba have a directory within /var/log/ for their log files. You may notice multiple files in the log file directory with numbers after them. These are created when the log files are rotated. Log files are rotated so their file sizes do not become too large. The logrotate package contains a cron task that automatically rotates log files according to the /etc/logrotate.conf configuration file and the configuration files in the /etc/logrotate.d/ directory. By default, it is configured to rotate every week and keep four weeks' worth of log files.
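For example, on a default installation the rotated copies of the system log can be listed next to the active file, and the rotation policy can be reviewed, with commands such as the following (a sketch; the exact file names depend on the logrotate configuration):

ls /var/log/messages*
cat /etc/logrotate.conf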
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Log_Files
Config APIs
Config APIs OpenShift Container Platform 4.18 Reference guide for config APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/config_apis/index
Chapter 85. versions
Chapter 85. versions This chapter describes the commands under the versions command. 85.1. versions show Show available versions of services Usage: Table 85.1. Command arguments Value Summary -h, --help Show this help message and exit --all-interfaces Show values for all interfaces --interface <interface> Show versions for a specific interface. --region-name <region_name> Show versions for a specific region. --service <region_name> Show versions for a specific service. --status <region_name> Show versions for a specific status. [valid values are SUPPORTED, CURRENT, DEPRECATED, EXPERIMENTAL] Table 85.2. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 85.3. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 85.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 85.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
[ "openstack versions show [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--all-interfaces | --interface <interface>] [--region-name <region_name>] [--service <region_name>] [--status <region_name>]" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/versions
37.3. Updating the Size of Your Multipath Device
37.3. Updating the Size of Your Multipath Device If multipathing is enabled on your system, you will also need to reflect the change in logical unit size to the logical unit's corresponding multipath device ( after resizing the logical unit). For Red Hat Enterprise Linux 5.3 (or later), you can do this through multipathd . To do so, first ensure that multipathd is running using service multipathd status . Once you've verified that multipathd is operational, run the following command: The multipath_device variable is the corresponding multipath entry of your device in /dev/mapper . Depending on how multipathing is set up on your system, multipath_device can be either of two formats: mpath X , where X is the corresponding entry of your device (for example, mpath0 ) a WWID; for example, 3600508b400105e210000900000490000 To determine which multipath entry corresponds to your resized logical unit, run multipath -ll . This displays a list of all existing multipath entries in the system, along with the major and minor numbers of their corresponding devices. Important Do not use multipathd -k"resize map multipath_device " if there are any commands queued to multipath_device . That is, do not use this command when the no_path_retry parameter (in /etc/multipath.conf ) is set to "queue" , and there are no active paths to the device. If your system is using Red Hat Enterprise Linux 5.0-5.2, you will need to perform the following procedure to instruct the multipathd daemon to recognize (and adjust to) the changes you made to the resized logical unit: Procedure 37.1. Resizing the Corresponding Multipath Device (Required for Red Hat Enterprise Linux 5.0 - 5.2) Dump the device mapper table for the multipathed device using: dmsetup table multipath_device Save the dumped device mapper table as table_name . This table will be re-loaded and edited later. Examine the device mapper table. Note that the first two numbers in each line correspond to the start and end sectors of the disk, respectively. Suspend the device mapper target: dmsetup suspend multipath_device Open the device mapper table you saved earlier (i.e. table_name ). Change the second number (i.e. the disk end sector) to reflect the new number of 512 byte sectors in the disk. For example, if the new disk size is 2GB, change the second number to 4194304. Reload the modified device mapper table: dmsetup reload multipath_device table_name Resume the device mapper target: dmsetup resume multipath_device For more information about multipathing, refer to the Red Hat Enterprise Linux 6 DM Multipath guide.
[ "multipathd -k\"resize map multipath_device \"" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/ch37s03
Chapter 1. Introduction
Chapter 1. Introduction This document clarifies some important information related to features and support for Red Hat JBoss Data Grid, such as: The two JBoss Data Grid Usage Modes Supported JBoss Data Grid features JBoss Data Grid features that are limited to a technology preview. 1.1. About Usage Modes Red Hat JBoss Data Grid offers two usage modes: Remote Client-Server mode Library mode Remote Client-Server mode provides a managed, distributed, and clusterable data grid server. Applications can remotely access the data grid server using Hot Rod , memcached or REST client APIs. Library mode allows the user to build and deploy a custom runtime environment. The Library usage mode hosts a single data grid node in the application's process, with remote access to nodes hosted in other JVMs. Tested containers for JBoss Data Grid Library mode include JBoss Enterprise Web Server and JBoss Enterprise Application Platform (see https://access.redhat.com/articles/115883 for details about supported containers). Additionally, Library mode is supported outside the listed containers as a standalone application.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/feature_support_document/chap-introduction
Nodes
Nodes OpenShift Container Platform 4.14 Configuring and managing nodes in OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/nodes/index
Providing feedback on Red Hat build of OpenJDK documentation
Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Create creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.25/proc-providing-feedback-on-redhat-documentation
1.3. Configuring the DHCP Client Behavior
1.3. Configuring the DHCP Client Behavior A Dynamic Host Configuration Protocol (DHCP) client requests the dynamic IP address and corresponding configuration information from a DHCP server each time a client connects to the network. Note that NetworkManager calls the DHCP client, dhclient , by default. Requesting an IP Address When a DHCP connection is started, a dhcp client requests an IP address from a DHCP server. The time that a dhcp client waits for this request to be completed is 60 seconds by default. You can configure the ipv4.dhcp-timeout property using the nmcli tool or the IPV4_DHCP_TIMEOUT option in the /etc/sysconfig/network-scripts/ifcfg- ifname file. For example, using nmcli : If an address cannot be obtained during this interval, the IPv4 configuration fails. The whole connection may fail, too, and this depends on the ipv4.may-fail property: If ipv4.may-fail is set to yes (default), the state of the connection depends on IPv6 configuration: If the IPv6 configuration is enabled and successful, the connection is activated, but the IPv4 configuration can never be retried again. If the IPv6 configuration is disabled or does not get configured, the connection fails. If ipv4.may-fail is set to no , the connection is deactivated. In this case: If the autoconnect property of the connection is enabled, NetworkManager retries activating the connection as many times as set in the autoconnect-retries property. The default is 4. If the connection still cannot acquire the dhcp address, auto-activation fails. Note that after 5 minutes, the auto-connection process starts again and the dhcp client retries to acquire an address from the dhcp server. Requesting a Lease Renewal When a dhcp address is acquired and the IP address lease cannot be renewed, the dhcp client is restarted three times every 2 minutes to try to get a lease from the dhcp server. Each attempt waits for the number of seconds set in the ipv4.dhcp-timeout property (60 by default) to get the lease. If a reply is received during these attempts, the process stops and the lease is renewed. After three failed attempts: If ipv4.may-fail is set to yes (default) and IPv6 is successfully configured, the connection is activated and the dhcp client is restarted again every 2 minutes. If ipv4.may-fail is set to no , the connection is deactivated. In this case, if the connection has the autoconnect property enabled, the connection is activated from scratch. 1.3.1. Making DHCPv4 Persistent To make DHCPv4 persistent both at startup and during the lease renewal processes, set the ipv4.dhcp-timeout property either to the maximum for a 32-bit integer (MAXINT32), which is 2147483647 , or to the infinity value: As a result, NetworkManager never stops trying to get or renew a lease from a DHCP server until it is successful. To ensure persistent DHCP behavior only during the lease renewal process, you can manually add a static IP address to the IPADDR property in the /etc/sysconfig/network-scripts/ifcfg- enp1s0 configuration file, or by using nmcli : When an IP address lease expires, the static IP preserves the IP state as configured or partially configured (you can have an IP address, but you are not connected to the Internet), making sure that the dhcp client is restarted every 2 minutes.
[ "~]# nmcli connection modify enp1s0 ipv4.dhcp-timeout 10", "~]USD nmcli connection modify enps1s0 ipv4.dhcp-timeout infinity", "~]USD nmcli connection modify enp1s0 ipv4.address 192.168.122.88/24" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/configuring_the_dhcp_client_behavior
5.145. liberation-fonts
5.145. liberation-fonts 5.145.1. RHBA-2012:0384 - liberation-fonts bug fix update Updated liberation-fonts packages that fix one bug are now available for Red Hat Enterprise Linux 6. The liberation-fonts packages provide fonts intended to replace the three most commonly used fonts on Microsoft systems: Times New Roman, Arial, and Courier New. Bug Fix BZ# 772165 Previously, the "fonts.dir" file provided with the liberation-fonts packages was empty. As a consequence, legacy applications were not able to make use of liberation-fonts even when the package was installed. This was because the "mkfontscale" command was run after the "mkfontdir" command. The order of running the commands has been changed and legacy applications can use liberation-fonts as expected. All users of liberation-fonts are advised to upgrade to these updated packages, which fix this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/liberation-fonts
16.14. Where to Find Further Documentation
16.14. Where to Find Further Documentation The primary source of documentation for libguestfs and its tools is the Unix man pages. The API is documented in guestfs(3). guestfish is documented in guestfish(1). The virt tools are documented in their own man pages (for example, virt-df(1)).
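For example, each of the manuals named above can be opened directly from a shell:
man 3 guestfs     # C API reference
man 1 guestfish   # interactive shell
man 1 virt-df     # one of the virt tools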
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-libguestfs-more-docs
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation. The Jira issue will be created in the Red Hat OpenStack Platform Jira project, where you can track the progress of your feedback. Ensure that you are logged in to Jira. If you do not have a Jira account, create an account to submit feedback. Click the following link to open the Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/configuring_the_compute_service_for_instance_creation/proc_providing-feedback-on-red-hat-documentation
7.213. tcpdump
7.213. tcpdump 7.213.1. RHBA-2015:1294 - tcpdump bug fix and enhancement update Updated tcpdump packages that fix two bugs and add two enhancements are now available for Red Hat Enterprise Linux 6. The tcpdump packages contain a command-line tool for monitoring network traffic. Tcpdump can capture and display the packet headers on a particular network interface or on all interfaces. Tcpdump can display all of the packet headers, or just the ones that match particular criteria. Bug Fixes BZ# 972396 Previously, the tcpdump utility treated the argument of the "-i" option as a number, not as a string, if it contained a numeric prefix followed by other characters. Consequently, packet capturing was not started on the specified interface at all or could get started on an incorrect interface. With this update, the argument for "-i" is treated as a number only if it contains only numerals 0-9; otherwise, the argument is treated as a string. For example, interface names such as "192_1_2" are no longer treated as interface number 192, but as a string. As a result, tcpdump starts correctly on the specified interface even if the interface name contains a numeric prefix. BZ# 1130111 The tcpdump Cisco Discovery Protocol (CDP) dissector previously stopped parsing the packet prematurely after encountering a Type-Length-Value (TLV) field that had a length of 0 and no data associated with it. Consequently, some CDP packets were not completely dissected. A patch that alters the code deciding when to stop parsing the packet has been applied to fix this bug. Now, zero-length data TLVs are allowed, and CDP packets containing such TLVs are parsed correctly. Enhancements BZ# 1045601 The kernel, glibc, and libpcap utilities now provide APIs to obtain nanosecond-resolution timestamps. The user can thus query which timestamp sources are available ("-J"), set a specific timestamp source ("-j"), and request timestamps with a specified resolution ("--time-stamp-precision"). BZ# 1099701 This update adds the new "-P" command-line argument for capturing packets in a certain direction, which can ease debugging of networking-related problems. Users of tcpdump are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
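A brief sketch of the timestamp and direction options described above, assuming this updated tcpdump build is installed; the interface name eth0 is illustrative:
# List the time stamp sources available on the interface
tcpdump -i eth0 -J
# Capture only incoming packets, using the host clock with nanosecond precision
tcpdump -i eth0 -P in -j host --time-stamp-precision=nano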
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-tcpdump
4.10. Detaching the Disks for the Image Creation Process
4.10. Detaching the Disks for the Image Creation Process Now that the disks have been set up, they must be detached from the VM instances so that they can be used for the next step, the image creation process. The VM instance is deleted in the process. In the Google Developers Console, click Compute > Compute Engine > VM instances > rhgs-primary-n01 . Scroll down to the Disks section (there should be one entry for the boot disk and one for the additional disks). Ensure that the delete boot disk when instance is deleted checkbox is unchecked and that the When deleting instance option for the additional disk is set to Keep disk . Now, click Delete at the top to delete the VM instance.
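The same steps can likely be performed from the command line with the gcloud tool; a hedged sketch, in which the zone is an assumed placeholder:
# Delete the instance while keeping the boot disk and the additional disk for reuse
gcloud compute instances delete rhgs-primary-n01 --zone us-central1-a --keep-disks all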
null
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/deployment_guide_for_public_cloud/sect-documentation-deployment_guide_for_public_cloud-google_cloud_platform-detaching_disks
1.4.2. Direct Routing
1.4.2. Direct Routing Building a Load Balancer Add-On setup that uses direct routing provides increased performance benefits compared to other Load Balancer Add-On networking topologies. Direct routing allows the real servers to process and route packets directly to a requesting user rather than passing all outgoing packets through the LVS router. Direct routing reduces the possibility of network performance issues by relegating the job of the LVS router to processing incoming packets only. Figure 1.4. Load Balancer Add-On Implemented with Direct Routing In the typical direct routing Load Balancer Add-On setup, the LVS router receives incoming server requests through the virtual IP (VIP) and uses a scheduling algorithm to route the request to the real servers. The real server processes the request and sends the response directly to the client, bypassing the LVS router. This method of routing allows for scalability in that real servers can be added without the extra burden on the LVS router of routing outgoing packets from the real servers to the client, a task that can become a bottleneck under heavy network load. 1.4.2.1. Direct Routing and the ARP Limitation While there are many advantages to using direct routing in Load Balancer Add-On, there are limitations as well. The most common issue with Load Balancer Add-On by means of direct routing is with Address Resolution Protocol ( ARP ). In typical situations, a client on the Internet sends a request to an IP address. Network routers typically send requests to their destination by relating IP addresses to a machine's MAC address with ARP. ARP requests are broadcast to all connected machines on a network, and the machine with the correct IP/MAC address combination receives the packet. The IP/MAC associations are stored in an ARP cache, which is cleared periodically (usually every 15 minutes) and refilled with IP/MAC associations. The issue with ARP requests in a direct routing Load Balancer Add-On setup is that because a client request to an IP address must be associated with a MAC address for the request to be handled, the virtual IP address of the Load Balancer Add-On system must also be associated with a MAC address. However, because the LVS router and the real servers all have the same VIP, the ARP request is broadcast to all the machines associated with the VIP. This can cause several problems, such as the VIP being associated directly with one of the real servers, which then processes requests directly, bypassing the LVS router completely and defeating the purpose of the Load Balancer Add-On setup. To solve this issue, ensure that incoming requests are always sent to the LVS router rather than to one of the real servers. This can be done by using either the arptables_jf or the iptables packet filtering tool, as follows: The arptables_jf method prevents ARP from associating VIPs with real servers. The iptables method completely sidesteps the ARP problem by not configuring VIPs on real servers in the first place. For more information on using arptables or iptables in a direct routing Load Balancer Add-On environment, see Section 3.2.1, "Direct Routing and arptables_jf " or Section 3.2.2, "Direct Routing and iptables " .
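As a rough illustration of the arptables_jf approach (the authoritative procedure is in Section 3.2.1, "Direct Routing and arptables_jf "), each real server drops ARP requests for the VIP and rewrites the source address of its outgoing ARP traffic; the addresses below are placeholders:
# On each real server: ignore ARP requests for the virtual IP (192.0.2.10 here)
arptables -A IN -d 192.0.2.10 -j DROP
# Rewrite outgoing ARP traffic so it advertises the real server's own address (192.0.2.21 here)
arptables -A OUT -s 192.0.2.10 -j mangle --mangle-ip-s 192.0.2.21
# Persist the rules across reboots
service arptables_jf save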
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/load_balancer_administration/s2-lvs-directrouting-VSA
Chapter 3. Managing LVM volume groups
Chapter 3. Managing LVM volume groups You can create and use volume groups (VGs) to manage and resize multiple physical volumes (PVs) combined into a single storage entity. Extents are the smallest units of space that you can allocate in LVM. Physical extents (PE) and logical extents (LE) have a default size of 4 MiB, which you can configure. All extents have the same size. When you create a logical volume (LV) within a VG, LVM allocates physical extents on the PVs. The logical extents within the LV correspond one-to-one with physical extents in the VG. You do not need to specify the PEs to create LVs. LVM will locate the available PEs and piece them together to create an LV of the requested size. Within a VG, you can create multiple LVs, each acting like a traditional partition but with the ability to span across physical volumes and resize dynamically. VGs can manage the allocation of disk space automatically. 3.1. Creating an LVM volume group You can use the vgcreate command to create a volume group (VG). You can adjust the extent size for very large or very small volumes to optimize performance and storage efficiency. You can specify the extent size when creating a VG. To change the extent size, you must re-create the volume group. Prerequisites Administrative access. The lvm2 package is installed. One or more physical volumes are created. For more information about creating physical volumes, see Creating LVM physical volume . Procedure List and identify the PV that you want to include in the VG: Create a VG: Replace VolumeGroupName with the name of the volume group that you want to create. Replace PhysicalVolumeName with the name of the PV. To specify the extent size when creating a VG, use the -s ExtentSize option. Replace ExtentSize with the size of the extent. If you provide no size suffix, the command defaults to MB. Verification Verify that the VG is created: Additional resources vgcreate(8) , vgs(8) , and pvs(8) man pages on your system 3.2. Creating volume groups in the web console Create volume groups from one or more physical drives or other storage devices. Logical volumes are created from volume groups. Each volume group can include multiple logical volumes. Prerequisites You have installed the RHEL 9 web console. For instructions, see Installing and enabling the web console . The cockpit-storaged package is installed on your system. Physical drives or other types of storage devices from which you want to create volume groups. Procedure Log in to the RHEL 9 web console. For details, see Logging in to the web console . Click Storage . In the Storage table, click the menu button. From the drop-down menu, select Create LVM2 volume group . In the Name field, enter a name for the volume group. The name must not include spaces. Select the drives you want to combine to create the volume group. The RHEL web console displays only unused block devices. If you do not see your device in the list, make sure that it is not being used by your system, or format it to be empty and unused. Used devices include, for example: Devices formatted with a file system Physical volumes in another volume group Physical volumes that are members of another software RAID device Click Create . The volume group is created. Verification On the Storage page, check whether the new volume group is listed in the Storage table. 3.3. Renaming an LVM volume group You can use the vgrename command to rename a volume group (VG). Prerequisites Administrative access. The lvm2 package is installed.
One or more physical volumes are created. For more information about creating physical volumes, see Creating LVM physical volume . The volume group is created. For more information about creating volume groups, see Section 3.1, "Creating an LVM volume group" . Procedure List and identify the VG that you want to rename: Rename the VG: Replace OldVolumeGroupName with the name of the VG. Replace NewVolumeGroupName with the new name for the VG. Verification Verify that the VG has a new name: Additional resources vgrename(8) , vgs(8) man pages 3.4. Extending an LVM volume group You can use the vgextend command to add physical volumes (PVs) to a volume group (VG). Prerequisites Administrative access. The lvm2 package is installed. One or more physical volumes are created. For more information about creating physical volumes, see Creating LVM physical volume . The volume group is created. For more information about creating volume groups, see Section 3.1, "Creating an LVM volume group" . Procedure List and identify the VG that you want to extend: List and identify the PVs that you want to add to the VG: Extend the VG: Replace VolumeGroupName with the name of the VG. Replace PhysicalVolumeName with the name of the PV. Verification Verify that the VG now includes the new PV: Additional resources vgextend(8) , vgs(8) , pvs(8) man pages 3.5. Combining LVM volume groups You can combine two existing volume groups (VGs) with the vgmerge command. The source volume will be merged into the destination volume. Prerequisites Administrative access. The lvm2 package is installed. One or more physical volumes are created. For more information about creating physical volumes, see Creating LVM physical volume . Two or more volume group are created. For more information about creating volume groups, see Section 3.1, "Creating an LVM volume group" . Procedure List and identify the VG that you want to merge: Merge the source VG into the destination VG: Replace VolumeGroupName2 with the name of the source VG. Replace VolumeGroupName1 with the name of the destination VG. Verification Verify that the VG now includes the new PV: Additional resources vgmerge(8) man page on your system 3.6. Removing physical volumes from a volume group To remove unused physical volumes (PVs) from a volume group (VG), use the vgreduce command. The vgreduce command shrinks a volume group's capacity by removing one or more empty physical volumes. This frees those physical volumes to be used in different volume groups or to be removed from the system. Procedure If the physical volume is still being used, migrate the data to another physical volume from the same volume group: If there are not enough free extents on the other physical volumes in the existing volume group: Create a new physical volume from /dev/vdb4 : Add the newly created physical volume to the volume group: Move the data from /dev/vdb3 to /dev/vdb4 : Remove the physical volume /dev/vdb3 from the volume group: Verification Verify that the /dev/vdb3 physical volume is removed from the VolumeGroupName volume group: Additional resources vgreduce(8) , pvmove(8) , and pvs(8) man pages on your system 3.7. Splitting a LVM volume group If there is enough unused space on the physical volumes, a new volume group can be created without adding new disks. In the initial setup, the volume group VolumeGroupName1 consists of /dev/vdb1 , /dev/vdb2 , and /dev/vdb3 . 
After completing this procedure, the volume group VolumeGroupName1 will consist of /dev/vdb1 and /dev/vdb2 , and the second volume group, VolumeGroupName2 , will consist of /dev/vdb3 . Prerequisites You have sufficient space in the volume group. Use the vgscan command to determine how much free space is currently available in the volume group. Depending on the free capacity in the existing physical volume, move all the used physical extents to other physical volume using the pvmove command. For more information, see Removing physical volumes from a volume group . Procedure Split the existing volume group VolumeGroupName1 to the new volume group VolumeGroupName2 : Note If you have created a logical volume using the existing volume group, use the following command to deactivate the logical volume: View the attributes of the two volume groups: Verification Verify that the newly created volume group VolumeGroupName2 consists of /dev/vdb3 physical volume: Additional resources vgsplit(8) , vgs(8) , and pvs(8) man pages on your system 3.8. Moving a volume group to another system You can move an entire LVM volume group (VG) to another system using the following commands: vgexport Use this command on an existing system to make an inactive VG inaccessible to the system. Once the VG is inaccessible, you can detach its physical volumes (PV). vgimport Use this command on the other system to make the VG, which was inactive in the old system, accessible in the new system. Prerequisites No users are accessing files on the active volumes in the volume group that you are moving. Procedure Unmount the LogicalVolumeName logical volume: Deactivate all logical volumes in the volume group, which prevents any further activity on the volume group: Export the volume group to prevent it from being accessed by the system from which you are removing it: View the exported volume group: Shut down your system and unplug the disks that make up the volume group and connect them to the new system. Plug the disks into the new system and import the volume group to make it accessible to the new system: Note You can use the --force argument of the vgimport command to import volume groups that are missing physical volumes and subsequently run the vgreduce --removemissing command. Activate the volume group: Mount the file system to make it available for use: Additional resources vgimport(8) , vgexport(8) , and vgchange(8) man pages on your system 3.9. Removing LVM volume groups You can remove an existing volume group using the vgremove command. Only volume groups that do not contain logical volumes can be removed. Prerequisites Administrative access. Procedure Ensure the volume group does not contain logical volumes: Replace VolumeGroupName with the name of the volume group. Remove the volume group: Replace VolumeGroupName with the name of the volume group. Additional resources vgs(8) , vgremove(8) man pages on your system 3.10. Removing LVM volume groups in a cluster environment In a cluster environment, LVM uses the lockspace <qualifier> to coordinate access to volume groups shared among multiple machines. You must stop the lockspace before removing a volume group to make sure no other node is trying to access or modify it during the removal process. Prerequisites Administrative access. The volume group contains no logical volumes. Procedure Ensure the volume group does not contain logical volumes: Replace VolumeGroupName with the name of the volume group. 
Stop the lockspace on all nodes except the node where you are removing the volume group: Replace VolumeGroupName with the name of the volume group and wait for the lock to stop. Remove the volume group: Replace VolumeGroupName with the name of the volume group. Additional resources vgremove(8) , vgchange(8) man pages
[ "pvs", "vgcreate VolumeGroupName PhysicalVolumeName1 PhysicalVolumeName2", "vgs VG #PV #LV #SN Attr VSize VFree VolumeGroupName 1 0 0 wz--n- 28.87g 28.87g", "vgs", "vgrename OldVolumeGroupName NewVolumeGroupName", "vgs VG #PV #LV #SN Attr VSize VFree NewVolumeGroupName 1 0 0 wz--n- 28.87g 28.87g", "vgs", "pvs", "vgextend VolumeGroupName PhysicalVolumeName", "pvs PV VG Fmt Attr PSize PFree /dev/sda VolumeGroupName lvm2 a-- 28.87g 28.87g /dev/sdd VolumeGroupName lvm2 a-- 1.88g 1.88g", "vgs VG #PV #LV #SN Attr VSize VFree VolumeGroupName1 1 0 0 wz--n- 28.87g 28.87g VolumeGroupName2 1 0 0 wz--n- 1.88g 1.88g", "vgmerge VolumeGroupName2 VolumeGroupName1", "vgs VG #PV #LV #SN Attr VSize VFree VolumeGroupName1 2 0 0 wz--n- 30.75g 30.75g", "pvmove /dev/vdb3 /dev/vdb3 : Moved: 2.0% /dev/vdb3 : Moved: 79.2% /dev/vdb3 : Moved: 100.0%", "pvcreate /dev/vdb4 Physical volume \" /dev/vdb4 \" successfully created", "vgextend VolumeGroupName /dev/vdb4 Volume group \" VolumeGroupName \" successfully extended", "pvmove /dev/vdb3 /dev/vdb4 /dev/vdb3 : Moved: 33.33% /dev/vdb3 : Moved: 100.00%", "vgreduce VolumeGroupName /dev/vdb3 Removed \" /dev/vdb3 \" from volume group \" VolumeGroupName \"", "pvs PV VG Fmt Attr PSize PFree Used /dev/vdb1 VolumeGroupName lvm2 a-- 1020.00m 0 1020.00m /dev/vdb2 VolumeGroupName lvm2 a-- 1020.00m 0 1020.00m /dev/vdb3 lvm2 a-- 1020.00m 1008.00m 12.00m", "vgsplit VolumeGroupName1 VolumeGroupName2 /dev/vdb3 Volume group \" VolumeGroupName2 \" successfully split from \" VolumeGroupName1 \"", "lvchange -a n /dev/VolumeGroupName1/LogicalVolumeName", "vgs VG #PV #LV #SN Attr VSize VFree VolumeGroupName1 2 1 0 wz--n- 34.30G 10.80G VolumeGroupName2 1 0 0 wz--n- 17.15G 17.15G", "pvs PV VG Fmt Attr PSize PFree Used /dev/vdb1 VolumeGroupName1 lvm2 a-- 1020.00m 0 1020.00m /dev/vdb2 VolumeGroupName1 lvm2 a-- 1020.00m 0 1020.00m /dev/vdb3 VolumeGroupName2 lvm2 a-- 1020.00m 1008.00m 12.00m", "umount /dev/mnt/ LogicalVolumeName", "vgchange -an VolumeGroupName vgchange -- volume group \"VolumeGroupName\" successfully deactivated", "vgexport VolumeGroupName vgexport -- volume group \"VolumeGroupName\" successfully exported", "pvscan PV /dev/sda1 is in exported VG VolumeGroupName [17.15 GB / 7.15 GB free] PV /dev/sdc1 is in exported VG VolumeGroupName [17.15 GB / 15.15 GB free] PV /dev/sdd1 is in exported VG VolumeGroupName [17.15 GB / 15.15 GB free]", "vgimport VolumeGroupName", "vgchange -ay VolumeGroupName", "mkdir -p /mnt/ VolumeGroupName /users mount /dev/ VolumeGroupName /users /mnt/ VolumeGroupName /users", "vgs -o vg_name,lv_count VolumeGroupName VG #LV VolumeGroupName 0", "vgremove VolumeGroupName", "vgs -o vg_name,lv_count VolumeGroupName VG #LV VolumeGroupName 0", "vgchange --lockstop VolumeGroupName", "vgremove VolumeGroupName" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_logical_volumes/managing-lvm-volume-groups_configuring-and-managing-logical-volumes
function::str_replace
function::str_replace Name function::str_replace - str_replace Replaces all instances of a substring with another Synopsis Arguments prnt_str the string to search and replace in srch_str the substring which is used to search in prnt_str string rplc_str the substring which is used to replace srch_str Description This function returns the given string with substrings replaced.
[ "str_replace:string(prnt_str:string,srch_str:string,rplc_str:string)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-str-replace
Chapter 1. Service Mesh 2.x
Chapter 1. Service Mesh 2.x 1.1. About OpenShift Service Mesh Note Because Red Hat OpenShift Service Mesh releases on a different cadence from OpenShift Container Platform and because the Red Hat OpenShift Service Mesh Operator supports deploying multiple versions of the ServiceMeshControlPlane , the Service Mesh documentation does not maintain separate documentation sets for minor versions of the product. The current documentation set applies to all currently supported versions of Service Mesh unless version-specific limitations are called out in a particular topic or for a particular feature. For additional information about the Red Hat OpenShift Service Mesh life cycle and supported platforms, refer to the Platform Life Cycle Policy . 1.1.1. Introduction to Red Hat OpenShift Service Mesh Red Hat OpenShift Service Mesh addresses a variety of problems in a microservice architecture by creating a centralized point of control in an application. It adds a transparent layer on existing distributed applications without requiring any changes to the application code. Microservice architectures split the work of enterprise applications into modular services, which can make scaling and maintenance easier. However, as an enterprise application built on a microservice architecture grows in size and complexity, it becomes difficult to understand and manage. Service Mesh can address those architecture problems by capturing or intercepting traffic between services and can modify, redirect, or create new requests to other services. Service Mesh, which is based on the open source Istio project , provides an easy way to create a network of deployed services that provides discovery, load balancing, service-to-service authentication, failure recovery, metrics, and monitoring. A service mesh also provides more complex operational functionality, including A/B testing, canary releases, access control, and end-to-end authentication. 1.1.2. Core features Red Hat OpenShift Service Mesh provides a number of key capabilities uniformly across a network of services: Traffic Management - Control the flow of traffic and API calls between services, make calls more reliable, and make the network more robust in the face of adverse conditions. Service Identity and Security - Provide services in the mesh with a verifiable identity and provide the ability to protect service traffic as it flows over networks of varying degrees of trustworthiness. Policy Enforcement - Apply organizational policy to the interaction between services, ensure access policies are enforced and resources are fairly distributed among consumers. Policy changes are made by configuring the mesh, not by changing application code. Telemetry - Gain understanding of the dependencies between services and the nature and flow of traffic between them, providing the ability to quickly identify issues. 1.2. Service Mesh Release Notes 1.2.1. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . 1.2.2. New features and enhancements This release adds improvements related to the following components and concepts. 1.2.2.1. 
New features Red Hat OpenShift Service Mesh version 2.3.2 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.9 or later. 1.2.2.1.1. Component versions included in Red Hat OpenShift Service Mesh version 2.3.2 Component Version Istio 1.14.5 Envoy Proxy 1.22.7 Jaeger 1.39 Kiali 1.57.6 1.2.2.2. New features Red Hat OpenShift Service Mesh version 2.3.1 This release of Red Hat OpenShift Service Mesh introduces new features, addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.9 or later. 1.2.2.2.1. Component versions included in Red Hat OpenShift Service Mesh version 2.3.1 Component Version Istio 1.14.5 Envoy Proxy 1.22.4 Jaeger 1.39 Kiali 1.57.5 1.2.2.3. New features Red Hat OpenShift Service Mesh version 2.3 This release of Red Hat OpenShift Service Mesh introduces new features, addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.9, 4.10, and 4.11. 1.2.2.3.1. Component versions included in Red Hat OpenShift Service Mesh version 2.3 Component Version Istio 1.14.3 Envoy Proxy 1.22.4 Jaeger 1.38 Kiali 1.57.3 1.2.2.3.2. New Container Network Interface (CNI) DaemonSet container and ConfigMap The openshift-operators namespace includes a new istio CNI DaemonSet istio-cni-node-v2-3 and a new ConfigMap resource, istio-cni-config-v2-3 . When upgrading to Service Mesh Control Plane 2.3, the existing istio-cni-node DaemonSet is not changed, and a new istio-cni-node-v2-3 DaemonSet is created. This name change does not affect releases or any istio-cni-node CNI DaemonSet associated with a Service Mesh Control Plane deployed using a release. 1.2.2.3.3. Gateway injection support This release introduces generally available support for Gateway injection. Gateway configurations are applied to standalone Envoy proxies that are running at the edge of the mesh, rather than the sidecar Envoy proxies running alongside your service workloads. This enables the ability to customize gateway options. When using gateway injection, you must create the following resources in the namespace where you want to run your gateway proxy: Service , Deployment , Role , and RoleBinding . 1.2.2.3.4. Istio 1.14 Support Service Mesh 2.3 is based on Istio 1.14, which brings in new features and product enhancements. While many Istio 1.14 features are supported, the following exceptions should be noted: ProxyConfig API is supported with the exception of the image field. Telemetry API is a Technology Preview feature. SPIRE runtime is not a supported feature. 1.2.2.3.5. OpenShift Service Mesh Console Important OpenShift Service Mesh Console is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . This release introduces a Technology Preview version of the OpenShift Container Platform Service Mesh Console, which integrates the Kiali interface directly into the OpenShift web console. 
For additional information, see Introducing the OpenShift Service Mesh Console (A Technology Preview) 1.2.2.3.6. Cluster-Wide deployment This release introduces cluster-wide deployment as a Technology Preview feature. A cluster-wide deployment contains a Service Mesh Control Plane that monitors resources for an entire cluster. The control plane uses a single query across all namespaces to monitor each Istio or Kubernetes resource kind that affects the mesh configuration. In contrast, the multitenant approach uses a query per namespace for each resource kind. Reducing the number of queries the control plane performs in a cluster-wide deployment improves performance. 1.2.2.3.6.1. Configuring cluster-wide deployment The following example ServiceMeshControlPlane object configures a cluster-wide deployment. To create an SMCP for cluster-wide deployment, a user must belong to the cluster-admin ClusterRole. If the SMCP is configured for cluster-wide deployment, it must be the only SMCP in the cluster. You cannot change the control plane mode from multitenant to cluster-wide (or from cluster-wide to multitenant). If a multitenant control plane already exists, delete it and create a new one. This example configures the SMCP for cluster-wide deployment. apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: cluster-wide namespace: istio-system spec: version: v2.3 techPreview: controlPlaneMode: ClusterScoped 1 1 Enables Istiod to monitor resources at the cluster level rather than monitor each individual namespace. Additionally, the SMMR must also be configured for cluster-wide deployment. This example configures the SMMR for cluster-wide deployment. apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - '*' 1 1 Adds all namespaces to the mesh, including any namespaces you subsequently create. The following namespaces are not part of the mesh: kube, openshift, kube-* and openshift-*. 1.2.2.4. New features Red Hat OpenShift Service Mesh version 2.2.6 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.9 or later. 1.2.2.4.1. Component versions included in Red Hat OpenShift Service Mesh version 2.2.6 Component Version Istio 1.12.9 Envoy Proxy 1.20.8 Jaeger 1.39 Kiali 1.48.4 1.2.2.5. New features Red Hat OpenShift Service Mesh version 2.2.5 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.9 or later. 1.2.2.5.1. Component versions included in Red Hat OpenShift Service Mesh version 2.2.5 Component Version Istio 1.12.9 Envoy Proxy 1.20.8 Jaeger 1.39 Kiali 1.48.3 1.2.2.6. New features Red Hat OpenShift Service Mesh version 2.2.4 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.9 or later. 1.2.2.6.1. Component versions included in Red Hat OpenShift Service Mesh version 2.2.4 Component Version Istio 1.12.9 Envoy Proxy 1.20.8 Jaeger 1.36.14 Kiali 1.48.3 1.2.2.7. New features Red Hat OpenShift Service Mesh version 2.2.3 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), bug fixes, and is supported on OpenShift Container Platform 4.9 or later. 1.2.2.7.1. 
Component versions included in Red Hat OpenShift Service Mesh version 2.2.3 Component Version Istio 1.12.9 Envoy Proxy 1.20.8 Jaeger 1.36 Kiali 1.48.3 1.2.2.8. New features Red Hat OpenShift Service Mesh version 2.2.2 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), bug fixes, and is supported on OpenShift Container Platform 4.9 or later. 1.2.2.8.1. Component versions included in Red Hat OpenShift Service Mesh version 2.2.2 Component Version Istio 1.12.7 Envoy Proxy 1.20.6 Jaeger 1.36 Kiali 1.48.2-1 1.2.2.8.2. Copy route labels With this enhancement, in addition to copying annotations, you can copy specific labels for an OpenShift route. Red Hat OpenShift Service Mesh copies all labels and annotations present in the Istio Gateway resource (with the exception of annotations starting with kubectl.kubernetes.io) into the managed OpenShift Route resource. 1.2.2.9. New features Red Hat OpenShift Service Mesh version 2.2.1 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), bug fixes, and is supported on OpenShift Container Platform 4.9 or later. 1.2.2.9.1. Component versions included in Red Hat OpenShift Service Mesh version 2.2.1 Component Version Istio 1.12.7 Envoy Proxy 1.20.6 Jaeger 1.34.1 Kiali 1.48.2-1 1.2.2.10. New features Red Hat OpenShift Service Mesh 2.2 This release of Red Hat OpenShift Service Mesh adds new features and enhancements, and is supported on OpenShift Container Platform 4.9 or later. 1.2.2.10.1. Component versions included in Red Hat OpenShift Service Mesh version 2.2 Component Version Istio 1.12.7 Envoy Proxy 1.20.4 Jaeger 1.34.1 Kiali 1.48.0.16 1.2.2.10.2. WasmPlugin API This release adds support for the WasmPlugin API and deprecates the ServiceMeshExtension API. 1.2.2.10.3. ROSA support This release introduces service mesh support for Red Hat OpenShift on AWS (ROSA), including multi-cluster federation. 1.2.2.10.4. istio-node DaemonSet renamed This release, the istio-node DaemonSet is renamed to istio-cni-node to match the name in upstream Istio. 1.2.2.10.5. Envoy sidecar networking changes Istio 1.10 updated Envoy to send traffic to the application container using eth0 rather than lo by default. 1.2.2.10.6. Service Mesh Control Plane 1.1 This release marks the end of support for Service Mesh Control Planes based on Service Mesh 1.1 for all platforms. 1.2.2.10.7. Istio 1.12 Support Service Mesh 2.2 is based on Istio 1.12, which brings in new features and product enhancements. While many Istio 1.12 features are supported, the following unsupported features should be noted: AuthPolicy Dry Run is a tech preview feature. gRPC Proxyless Service Mesh is a tech preview feature. Telemetry API is a tech preview feature. Discovery selectors is not a supported feature. External control plane is not a supported feature. Gateway injection is not a supported feature. 1.2.2.10.8. Kubernetes Gateway API Important Kubernetes Gateway API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 
Kubernetes Gateway API is a technology preview feature that is disabled by default. If the Kubernetes API deployment controller is disabled, you must manually deploy and link an ingress gateway to the created Gateway object. If the Kubernetes API deployment controller is enabled, then an ingress gateway automatically deploys when a Gateway object is created. 1.2.2.10.8.1. Installing the Gateway API CRDs The Gateway API CRDs do not come pre-installed by default on OpenShift clusters. Install the CRDs prior to enabling Gateway API support in the SMCP. USD kubectl get crd gateways.gateway.networking.k8s.io || { kubectl kustomize "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v0.4.0" | kubectl apply -f -; } 1.2.2.10.8.2. Enabling Kubernetes Gateway API To enable the feature, set the following environment variables for the Istiod container in ServiceMeshControlPlane : spec: runtime: components: pilot: container: env: PILOT_ENABLE_GATEWAY_API: "true" PILOT_ENABLE_GATEWAY_API_STATUS: "true" # and optionally, for the deployment controller PILOT_ENABLE_GATEWAY_API_DEPLOYMENT_CONTROLLER: "true" Restricting route attachment on Gateway API listeners is possible using the SameNamespace or All settings. Istio ignores usage of label selectors in listeners.allowedRoutes.namespaces and reverts to the default behavior ( SameNamespace ). 1.2.2.10.8.3. Manually linking an existing gateway to a Gateway resource If the Kubernetes API deployment controller is disabled, you must manually deploy and then link an ingress gateway to the created Gateway resource. apiVersion: gateway.networking.k8s.io/v1alpha2 kind: Gateway metadata: name: gateway spec: addresses: - value: ingress.istio-gateways.svc.cluster.local type: Hostname 1.2.2.11. New features Red Hat OpenShift Service Mesh 2.1.6 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.9 or later. 1.2.2.11.1. Component versions included in Red Hat OpenShift Service Mesh version 2.1.6 Component Version Istio 1.9.9 Envoy Proxy 1.17.5 Jaeger 1.36 Kiali 1.36.15 1.2.2.12. New features Red Hat OpenShift Service Mesh 2.1.5.2 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), contains bug fixes, and is supported on OpenShift Container Platform 4.9 or later. 1.2.2.12.1. Component versions included in Red Hat OpenShift Service Mesh version 2.1.5.2 Component Version Istio 1.9.9 Envoy Proxy 1.17.5 Jaeger 1.36 Kiali 1.24.17 1.2.2.13. New features Red Hat OpenShift Service Mesh 2.1.5.1 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), bug fixes, and is supported on OpenShift Container Platform 4.9 or later. 1.2.2.13.1. Component versions included in Red Hat OpenShift Service Mesh version 2.1.5.1 Component Version Istio 1.9.9 Envoy Proxy 1.17.5 Jaeger 1.36 Kiali 1.36.13 1.2.2.14. New features Red Hat OpenShift Service Mesh 2.1.5 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), bug fixes, and is supported on OpenShift Container Platform 4.9 or later. 1.2.2.14.1. Component versions included in Red Hat OpenShift Service Mesh version 2.1.5 Component Version Istio 1.9.9 Envoy Proxy 1.17.1 Jaeger 1.36 Kiali 1.36.12-1 1.2.2.15. New features Red Hat OpenShift Service Mesh 2.1.4 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.2.2.15.1. 
Component versions included in Red Hat OpenShift Service Mesh version 2.1.4 Component Version Istio 1.9.9 Envoy Proxy 1.17.1 Jaeger 1.30.2 Kiali 1.36.12-1 1.2.2.16. New features Red Hat OpenShift Service Mesh 2.1.3 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.2.2.16.1. Component versions included in Red Hat OpenShift Service Mesh version 2.1.3 Component Version Istio 1.9.9 Envoy Proxy 1.17.1 Jaeger 1.30.2 Kiali 1.36.10-2 1.2.2.17. New features Red Hat OpenShift Service Mesh 2.1.2.1 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.2.2.17.1. Component versions included in Red Hat OpenShift Service Mesh version 2.1.2.1 Component Version Istio 1.9.9 Envoy Proxy 1.17.1 Jaeger 1.30.2 Kiali 1.36.9 1.2.2.18. New features Red Hat OpenShift Service Mesh 2.1.2 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. With this release, the Red Hat OpenShift distributed tracing platform Operator is now installed to the openshift-distributed-tracing namespace by default. Previously the default installation had been in the openshift-operator namespace. 1.2.2.18.1. Component versions included in Red Hat OpenShift Service Mesh version 2.1.2 Component Version Istio 1.9.9 Envoy Proxy 1.17.1 Jaeger 1.30.1 Kiali 1.36.8 1.2.2.19. New features Red Hat OpenShift Service Mesh 2.1.1 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. This release also adds the ability to disable the automatic creation of network policies. 1.2.2.19.1. Component versions included in Red Hat OpenShift Service Mesh version 2.1.1 Component Version Istio 1.9.9 Envoy Proxy 1.17.1 Jaeger 1.24.1 Kiali 1.36.7 1.2.2.19.2. Disabling network policies Red Hat OpenShift Service Mesh automatically creates and manages a number of NetworkPolicies resources in the Service Mesh control plane and application namespaces. This is to ensure that applications and the control plane can communicate with each other. If you want to disable the automatic creation and management of NetworkPolicies resources, for example to enforce company security policies, you can do so. You can edit the ServiceMeshControlPlane to set the spec.security.manageNetworkPolicy setting to false Note When you disable spec.security.manageNetworkPolicy Red Hat OpenShift Service Mesh will not create any NetworkPolicy objects. The system administrator is responsible for managing the network and fixing any issues this might cause. Procedure In the OpenShift Container Platform web console, click Operators Installed Operators . Select the project where you installed the Service Mesh control plane, for example istio-system , from the Project menu. Click the Red Hat OpenShift Service Mesh Operator. In the Istio Service Mesh Control Plane column, click the name of your ServiceMeshControlPlane , for example basic-install . On the Create ServiceMeshControlPlane Details page, click YAML to modify your configuration. Set the ServiceMeshControlPlane field spec.security.manageNetworkPolicy to false , as shown in this example. apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: security: trust: manageNetworkPolicy: false Click Save . 1.2.2.20. 
New features and enhancements Red Hat OpenShift Service Mesh 2.1 This release of Red Hat OpenShift Service Mesh adds support for Istio 1.9.8, Envoy Proxy 1.17.1, Jaeger 1.24.1, and Kiali 1.36.5 on OpenShift Container Platform 4.6 EUS, 4.7, 4.8, 4.9, along with new features and enhancements. 1.2.2.20.1. Component versions included in Red Hat OpenShift Service Mesh version 2.1 Component Version Istio 1.9.6 Envoy Proxy 1.17.1 Jaeger 1.24.1 Kiali 1.36.5 1.2.2.20.2. Service Mesh Federation New Custom Resource Definitions (CRDs) have been added to support federating service meshes. Service meshes may be federated both within the same cluster or across different OpenShift clusters. These new resources include: ServiceMeshPeer - Defines a federation with a separate service mesh, including gateway configuration, root trust certificate configuration, and status fields. In a pair of federated meshes, each mesh will define its own separate ServiceMeshPeer resource. ExportedServiceMeshSet - Defines which services for a given ServiceMeshPeer are available for the peer mesh to import. ImportedServiceSet - Defines which services for a given ServiceMeshPeer are imported from the peer mesh. These services must also be made available by the peer's ExportedServiceMeshSet resource. Service Mesh Federation is not supported between clusters on Red Hat OpenShift Service on AWS (ROSA), Azure Red Hat OpenShift (ARO), or OpenShift Dedicated (OSD). 1.2.2.20.3. OVN-Kubernetes Container Network Interface (CNI) generally available The OVN-Kubernetes Container Network Interface (CNI) was previously introduced as a Technology Preview feature in Red Hat OpenShift Service Mesh 2.0.1 and is now generally available in Red Hat OpenShift Service Mesh 2.1 and 2.0.x for use on OpenShift Container Platform 4.7.32, OpenShift Container Platform 4.8.12, and OpenShift Container Platform 4.9. 1.2.2.20.4. Service Mesh WebAssembly (WASM) Extensions The ServiceMeshExtensions Custom Resource Definition (CRD), first introduced in 2.0 as Technology Preview, is now generally available. You can use CRD to build your own plugins, but Red Hat does not provide support for the plugins you create. Mixer has been completely removed in Service Mesh 2.1. Upgrading from a Service Mesh 2.0.x release to 2.1 will be blocked if Mixer is enabled. Mixer plugins will need to be ported to WebAssembly Extensions. 1.2.2.20.5. 3scale WebAssembly Adapter (WASM) With Mixer now officially removed, OpenShift Service Mesh 2.1 does not support the 3scale mixer adapter. Before upgrading to Service Mesh 2.1, remove the Mixer-based 3scale adapter and any additional Mixer plugins. Then, manually install and configure the new 3scale WebAssembly adapter with Service Mesh 2.1+ using a ServiceMeshExtension resource. 3scale 2.11 introduces an updated Service Mesh integration based on WebAssembly . 1.2.2.20.6. Istio 1.9 Support Service Mesh 2.1 is based on Istio 1.9, which brings in a large number of new features and product enhancements. While the majority of Istio 1.9 features are supported, the following exceptions should be noted: Virtual Machine integration is not yet supported Kubernetes Gateway API is not yet supported Remote fetch and load of WebAssembly HTTP filters are not yet supported Custom CA Integration using the Kubernetes CSR API is not yet supported Request Classification for monitoring traffic is a tech preview feature Integration with external authorization systems via Authorization policy's CUSTOM action is a tech preview feature 1.2.2.20.7. 
Improved Service Mesh operator performance The amount of time Red Hat OpenShift Service Mesh uses to prune old resources at the end of every ServiceMeshControlPlane reconciliation has been reduced. This results in faster ServiceMeshControlPlane deployments, and allows changes applied to existing SMCPs to take effect more quickly. 1.2.2.20.8. Kiali updates Kiali 1.36 includes the following features and enhancements: Service Mesh troubleshooting functionality Control plane and gateway monitoring Proxy sync statuses Envoy configuration views Unified view showing Envoy proxy and application logs interleaved Namespace and cluster boxing to support federated service mesh views New validations, wizards, and distributed tracing enhancements 1.2.2.21. New features Red Hat OpenShift Service Mesh 2.0.11.1 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), bug fixes, and is supported on OpenShift Container Platform 4.9 or later. 1.2.2.21.1. Component versions included in Red Hat OpenShift Service Mesh version 2.0.11.1 Component Version Istio 1.6.14 Envoy Proxy 1.14.5 Jaeger 1.36 Kiali 1.24.17 1.2.2.22. New features Red Hat OpenShift Service Mesh 2.0.11 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs), bug fixes, and is supported on OpenShift Container Platform 4.9 or later. 1.2.2.22.1. Component versions included in Red Hat OpenShift Service Mesh version 2.0.11 Component Version Istio 1.6.14 Envoy Proxy 1.14.5 Jaeger 1.36 Kiali 1.24.16-1 1.2.2.23. New features Red Hat OpenShift Service Mesh 2.0.10 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.2.2.23.1. Component versions included in Red Hat OpenShift Service Mesh version 2.0.10 Component Version Istio 1.6.14 Envoy Proxy 1.14.5 Jaeger 1.28.0 Kiali 1.24.16-1 1.2.2.24. New features Red Hat OpenShift Service Mesh 2.0.9 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.2.2.24.1. Component versions included in Red Hat OpenShift Service Mesh version 2.0.9 Component Version Istio 1.6.14 Envoy Proxy 1.14.5 Jaeger 1.24.1 Kiali 1.24.11 1.2.2.25. New features Red Hat OpenShift Service Mesh 2.0.8 This release of Red Hat OpenShift Service Mesh addresses bug fixes. 1.2.2.26. New features Red Hat OpenShift Service Mesh 2.0.7.1 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs). 1.2.2.26.1. Change in how Red Hat OpenShift Service Mesh handles URI fragments Red Hat OpenShift Service Mesh contains a remotely exploitable vulnerability, CVE-2021-39156 , where an HTTP request with a fragment (a section in the end of a URI that begins with a # character) in the URI path could bypass the Istio URI path-based authorization policies. For instance, an Istio authorization policy denies requests sent to the URI path /user/profile . In the vulnerable versions, a request with URI path /user/profile#section1 bypasses the deny policy and routes to the backend (with the normalized URI path /user/profile%23section1 ), possibly leading to a security incident. You are impacted by this vulnerability if you use authorization policies with DENY actions and operation.paths , or ALLOW actions and operation.notPaths . With the mitigation, the fragment part of the request's URI is removed before the authorization and routing. 
This prevents a request with a fragment in its URI from bypassing authorization policies which are based on the URI without the fragment part. To opt-out from the new behavior in the mitigation, the fragment section in the URI will be kept. You can configure your ServiceMeshControlPlane to keep URI fragments. Warning Disabling the new behavior will normalize your paths as described above and is considered unsafe. Ensure that you have accommodated for this in any security policies before opting to keep URI fragments. Example ServiceMeshControlPlane modification apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: techPreview: meshConfig: defaultConfig: proxyMetadata: HTTP_STRIP_FRAGMENT_FROM_PATH_UNSAFE_IF_DISABLED: "false" 1.2.2.26.2. Required update for authorization policies Istio generates hostnames for both the hostname itself and all matching ports. For instance, a virtual service or Gateway for a host of "httpbin.foo" generates a config matching "httpbin.foo and httpbin.foo:*". However, exact match authorization policies only match the exact string given for the hosts or notHosts fields. Your cluster is impacted if you have AuthorizationPolicy resources using exact string comparison for the rule to determine hosts or notHosts . You must update your authorization policy rules to use prefix match instead of exact match. For example, replacing hosts: ["httpbin.com"] with hosts: ["httpbin.com:*"] in the first AuthorizationPolicy example. First example AuthorizationPolicy using prefix match apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: foo spec: action: DENY rules: - from: - source: namespaces: ["dev"] to: - operation: hosts: ["httpbin.com","httpbin.com:*"] Second example AuthorizationPolicy using prefix match apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: default spec: action: DENY rules: - to: - operation: hosts: ["httpbin.example.com:*"] 1.2.2.27. New features Red Hat OpenShift Service Mesh 2.0.7 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.2.2.28. Red Hat OpenShift Service Mesh on Red Hat OpenShift Dedicated and Microsoft Azure Red Hat OpenShift Red Hat OpenShift Service Mesh is now supported through Red Hat OpenShift Dedicated and Microsoft Azure Red Hat OpenShift. 1.2.2.29. New features Red Hat OpenShift Service Mesh 2.0.6 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.2.2.30. New features Red Hat OpenShift Service Mesh 2.0.5 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.2.2.31. New features Red Hat OpenShift Service Mesh 2.0.4 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. Important There are manual steps that must be completed to address CVE-2021-29492 and CVE-2021-31920. 1.2.2.31.1. Manual updates required by CVE-2021-29492 and CVE-2021-31920 Istio contains a remotely exploitable vulnerability where an HTTP request path with multiple slashes or escaped slash characters ( %2F or %5C ) could potentially bypass an Istio authorization policy when path-based authorization rules are used. For example, assume an Istio cluster administrator defines an authorization DENY policy to reject the request at path /admin . 
A request sent to the URL path //admin will NOT be rejected by the authorization policy. According to RFC 3986 , the path //admin with multiple slashes should technically be treated as a different path from the /admin . However, some backend services choose to normalize the URL paths by merging multiple slashes into a single slash. This can result in a bypass of the authorization policy ( //admin does not match /admin ), and a user can access the resource at path /admin in the backend; this would represent a security incident. Your cluster is impacted by this vulnerability if you have authorization policies using ALLOW action + notPaths field or DENY action + paths field patterns. These patterns are vulnerable to unexpected policy bypasses. Your cluster is NOT impacted by this vulnerability if: You don't have authorization policies. Your authorization policies don't define paths or notPaths fields. Your authorization policies use ALLOW action + paths field or DENY action + notPaths field patterns. These patterns could only cause unexpected rejection instead of policy bypasses. The upgrade is optional for these cases. Note The Red Hat OpenShift Service Mesh configuration location for path normalization is different from the Istio configuration. 1.2.2.31.2. Updating the path normalization configuration Istio authorization policies can be based on the URL paths in the HTTP request. Path normalization , also known as URI normalization, modifies and standardizes the incoming requests' paths so that the normalized paths can be processed in a standard way. Syntactically different paths may be equivalent after path normalization. Istio supports the following normalization schemes on the request paths before evaluating against the authorization policies and routing the requests: Table 1.1. Normalization schemes Option Description Example Notes NONE No normalization is done. Anything received by Envoy will be forwarded exactly as-is to any backend service. ../%2Fa../b is evaluated by the authorization policies and sent to your service. This setting is vulnerable to CVE-2021-31920. BASE This is currently the option used in the default installation of Istio. This applies the normalize_path option on Envoy proxies, which follows RFC 3986 with extra normalization to convert backslashes to forward slashes. /a/../b is normalized to /b . \da is normalized to /da . This setting is vulnerable to CVE-2021-31920. MERGE_SLASHES Slashes are merged after the BASE normalization. /a//b is normalized to /a/b . Update to this setting to mitigate CVE-2021-31920. DECODE_AND_MERGE_SLASHES The strictest setting when you allow all traffic by default. This setting is recommended, with the caveat that you must thoroughly test your authorization policies routes. Percent-encoded slash and backslash characters ( %2F , %2f , %5C and %5c ) are decoded to / or \ , before the MERGE_SLASHES normalization. /a%2fb is normalized to /a/b . Update to this setting to mitigate CVE-2021-31920. This setting is more secure, but also has the potential to break applications. Test your applications before deploying to production. The normalization algorithms are conducted in the following order: Percent-decode %2F , %2f , %5C and %5c . The RFC 3986 and other normalization implemented by the normalize_path option in Envoy. Merge slashes. Warning While these normalization options represent recommendations from HTTP standards and common industry practices, applications may interpret a URL in any way it chooses to. 
When using denial policies, ensure that you understand how your application behaves. 1.2.2.31.3. Path normalization configuration examples Ensuring Envoy normalizes request paths to match your backend services' expectations is critical to the security of your system. The following examples can be used as a reference for you to configure your system. The normalized URL paths, or the original URL paths if NONE is selected, will be: Used to check against the authorization policies. Forwarded to the backend application. Table 1.2. Configuration examples If your application... Choose... Relies on the proxy to do normalization BASE , MERGE_SLASHES or DECODE_AND_MERGE_SLASHES Normalizes request paths based on RFC 3986 and does not merge slashes. BASE Normalizes request paths based on RFC 3986 and merges slashes, but does not decode percent-encoded slashes. MERGE_SLASHES Normalizes request paths based on RFC 3986 , decodes percent-encoded slashes, and merges slashes. DECODE_AND_MERGE_SLASHES Processes request paths in a way that is incompatible with RFC 3986 . NONE 1.2.2.31.4. Configuring your SMCP for path normalization To configure path normalization for Red Hat OpenShift Service Mesh, specify the following in your ServiceMeshControlPlane . Use the configuration examples to help determine the settings for your system. SMCP v2 pathNormalization spec: techPreview: global: pathNormalization: <option> 1.2.2.31.5. Configuring for case normalization In some environments, it may be useful to have paths in authorization policies compared in a case insensitive manner. For example, treating https://myurl/get and https://myurl/GeT as equivalent. In those cases, you can use the EnvoyFilter shown below. This filter will change both the path used for comparison and the path presented to the application. In this example, istio-system is the name of the Service Mesh control plane project. Save the EnvoyFilter to a file and run the following command: USD oc create -f <myEnvoyFilterFile> apiVersion: networking.istio.io/v1alpha3 kind: EnvoyFilter metadata: name: ingress-case-insensitive namespace: istio-system spec: configPatches: - applyTo: HTTP_FILTER match: context: GATEWAY listener: filterChain: filter: name: "envoy.filters.network.http_connection_manager" subFilter: name: "envoy.filters.http.router" patch: operation: INSERT_BEFORE value: name: envoy.lua typed_config: "@type": "type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua" inlineCode: | function envoy_on_request(request_handle) local path = request_handle:headers():get(":path") request_handle:headers():replace(":path", string.lower(path)) end 1.2.2.32. New features Red Hat OpenShift Service Mesh 2.0.3 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. In addition, this release has the following new features: Added an option to the must-gather data collection tool that gathers information from a specified Service Mesh control plane namespace. For more information, see OSSM-351 . Improved performance for Service Mesh control planes with hundreds of namespaces 1.2.2.33. New features Red Hat OpenShift Service Mesh 2.0.2 This release of Red Hat OpenShift Service Mesh adds support for IBM Z and IBM Power Systems. It also addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.2.2.34. New features Red Hat OpenShift Service Mesh 2.0.1 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.2.2.35. 
New features Red Hat OpenShift Service Mesh 2.0 This release of Red Hat OpenShift Service Mesh adds support for Istio 1.6.5, Jaeger 1.20.0, Kiali 1.24.2, and the 3scale Istio Adapter 2.0 and OpenShift Container Platform 4.6. In addition, this release has the following new features: Simplifies installation, upgrades, and management of the Service Mesh control plane. Reduces the Service Mesh control plane's resource usage and startup time. Improves performance by reducing inter-control plane communication over networking. Adds support for Envoy's Secret Discovery Service (SDS). SDS is a more secure and efficient mechanism for delivering secrets to Envoy side car proxies. Removes the need to use Kubernetes Secrets, which have well known security risks. Improves performance during certificate rotation, as proxies no longer require a restart to recognize new certificates. Adds support for Istio's Telemetry v2 architecture, which is built using WebAssembly extensions. This new architecture brings significant performance improvements. Updates the ServiceMeshControlPlane resource to v2 with a streamlined configuration to make it easier to manage the Service Mesh Control Plane. Introduces WebAssembly extensions as a Technology Preview feature. 1.2.3. Technology Preview Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Important Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.2.4. Deprecated and removed features Some features available in releases have been deprecated or removed. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. Removed functionality no longer exists in the product. 1.2.4.1. Deprecated and removed features Red Hat OpenShift Service Mesh 2.3 Support for the following cipher suites has been deprecated. In a future release, they will be removed from the default list of ciphers used in TLS negotiations on both the client and server sides. ECDHE-ECDSA-AES128-SHA ECDHE-RSA-AES128-SHA AES128-GCM-SHA256 AES128-SHA ECDHE-ECDSA-AES256-SHA ECDHE-RSA-AES256-SHA AES256-GCM-SHA384 AES256-SHA The ServiceMeshExtension API, which was deprecated in Red Hat OpenShift Service Mesh version 2.2, was removed in Red Hat OpenShift Service Mesh version 2.3. If you are using the ServiceMeshExtension API, you must migrate to the WasmPlugin API to continue using your WebAssembly extensions. 1.2.4.2. Deprecated features Red Hat OpenShift Service Mesh 2.2 The ServiceMeshExtension API is deprecated as of release 2.2 and will be removed in a future release. While ServiceMeshExtension API is still supported in release 2.2, customers should start moving to the new WasmPlugin API. 1.2.4.3. Removed features Red Hat OpenShift Service Mesh 2.2 This release marks the end of support for Service Mesh control planes based on Service Mesh 1.1 for all platforms. 1.2.4.4. 
Removed features Red Hat OpenShift Service Mesh 2.1 In Service Mesh 2.1, the Mixer component is removed. Bug fixes and support is provided through the end of the Service Mesh 2.0 life cycle. Upgrading from a Service Mesh 2.0.x release to 2.1 will not proceed if Mixer plugins are enabled. Mixer plugins must be ported to WebAssembly Extensions. 1.2.4.5. Deprecated features Red Hat OpenShift Service Mesh 2.0 The Mixer component was deprecated in release 2.0 and will be removed in release 2.1. While using Mixer for implementing extensions was still supported in release 2.0, extensions should have been migrated to the new WebAssembly mechanism. The following resource types are no longer supported in Red Hat OpenShift Service Mesh 2.0: Policy (authentication.istio.io/v1alpha1) is no longer supported. Depending on the specific configuration in your Policy resource, you may have to configure multiple resources to achieve the same effect. Use RequestAuthentication (security.istio.io/v1beta1) Use PeerAuthentication (security.istio.io/v1beta1) ServiceMeshPolicy (maistra.io/v1) is no longer supported. Use RequestAuthentication or PeerAuthentication , as mentioned above, but place in the Service Mesh control plane namespace. RbacConfig (rbac.istio.io/v1alpha1) is no longer supported. Replaced by AuthorizationPolicy (security.istio.io/v1beta1), which encompasses behavior of RbacConfig , ServiceRole , and ServiceRoleBinding . ServiceMeshRbacConfig (maistra.io/v1) is no longer supported. Use AuthorizationPolicy as above, but place in Service Mesh control plane namespace. ServiceRole (rbac.istio.io/v1alpha1) is no longer supported. ServiceRoleBinding (rbac.istio.io/v1alpha1) is no longer supported. In Kiali, the login and LDAP strategies are deprecated. A future version will introduce authentication using OpenID providers. 1.2.5. Known issues These limitations exist in Red Hat OpenShift Service Mesh: Red Hat OpenShift Service Mesh does not yet support IPv6 , as it is not yet fully supported by the upstream Istio project. As a result, Red Hat OpenShift Service Mesh does not support dual-stack clusters. Graph layout - The layout for the Kiali graph can render differently, depending on your application architecture and the data to display (number of graph nodes and their interactions). Because it is difficult if not impossible to create a single layout that renders nicely for every situation, Kiali offers a choice of several different layouts. To choose a different layout, you can choose a different Layout Schema from the Graph Settings menu. The first time you access related services such as distributed tracing platform and Grafana, from the Kiali console, you must accept the certificate and re-authenticate using your OpenShift Container Platform login credentials. This happens due to an issue with how the framework displays embedded pages in the console. The Bookinfo sample application cannot be installed on IBM Z and IBM Power. WebAssembly extensions are not supported on IBM Z and IBM Power. LuaJIT is not supported on IBM Power. 1.2.5.1. Service Mesh known issues These are the known issues in Red Hat OpenShift Service Mesh: OSSM-2221 Gateway injection does not work in control plane namespace. 
If you use the Gateway injection feature to create a gateway in the same location as the control plane, the injection fails and OpenShift generates this message: Warning Failed 10s kubelet, ocp-wide-vh8fd-worker-vhqm9 Failed to pull image "auto": rpc error: code = Unknown desc = reading manifest latest in docker.io/library/auto: errors To create a gateway in the control plane namespace, use the gateways parameter in the SMCP spec to configure ingress and egress gateways for the mesh. OSSM-2042 Deployment of SMCP named default fails. If you are creating an SMCP object, and set its version field to v2.3, the name of the object cannot be default . If the name is default , then the control plane fails to deploy, and OpenShift generates a Warning event with the following message: Error processing component mesh-config: error: [mesh-config/templates/telemetryv2_1.6.yaml: Internal error occurred: failed calling webhook "rev.validation.istio.io": Post "https://istiod-default.istio-system.svc:443/validate?timeout=10s": x509: certificate is valid for istiod.istio-system.svc, istiod-remote.istio-system.svc, istio-pilot.istio-system.svc, not istiod-default.istio-system.svc, mesh-config/templates/enable-mesh-permissive.yaml OSSM-1655 Kiali dashboard shows error after enabling mTLS in SMCP . After enabling the spec.security.controlPlane.mtls setting in the SMCP, the Kiali console displays the following error message No subsets defined . OSSM-1505 This issue only occurs when using the ServiceMeshExtension resource on OpenShift Container Platform 4.11. When you use ServiceMeshExtension on OpenShift Container Platform 4.11 the resource never becomes ready. If you inspect the issue using oc describe ServiceMeshExtension you will see the following error: stderr: Error creating mount namespace before pivot: function not implemented . Workaround: ServiceMeshExtension was deprecated in Service Mesh 2.2. Migrate from ServiceMeshExtension to the WasmPlugin resource. For more information, see Migrating from ServiceMeshExtension to WasmPlugin resources. OSSM-1396 If a gateway resource contains the spec.externalIPs setting, instead of being recreated when the ServiceMeshControlPlane is updated, the gateway is removed and never recreated. OSSM-1168 When service mesh resources are created as a single YAML file, the Envoy proxy sidecar is not reliably injected into pods. When the SMCP, SMMR, and Deployment resources are created individually, the deployment works as expected. OSSM-1115 The concurrency field of the spec.proxy API did not propagate to the istio-proxy. The concurrency field works when set with ProxyConfig . The concurrency field specifies the number of worker threads to run. If the field is set to 0 , then the number of worker threads available is equal to the number of CPU cores. If the field is not set, then the number of worker threads available defaults to 2 . In the following example, the concurrency field is set to 0 . apiVersion: networking.istio.io/v1beta1 kind: ProxyConfig metadata: name: mesh-wide-concurrency namespace: <istiod-namespace> spec: concurrency: 0 OSSM-1052 When configuring a Service ExternalIP for the ingressgateway in the Service Mesh control plane, the service is not created. The schema for the SMCP is missing the parameter for the service. Workaround: Disable the gateway creation in the SMCP spec and manage the gateway deployment entirely manually (including Service, Role and RoleBinding). OSSM-882 This applies for Service Mesh 2.1 and earlier. 
Namespace is in the accessible_namespace list but does not appear in Kiali UI. By default, Kiali will not show any namespaces that start with "kube" because these namespaces are typically internal-use only and not part of a mesh. For example, if you create a namespace called 'akube-a' and add it to the Service Mesh member roll, then the Kiali UI does not display the namespace. For defined exclusion patterns, the software excludes namespaces that start with or contain the pattern. Workaround: Change the Kiali Custom Resource setting so it prefixes the setting with a carat (^). For example: api: namespaces: exclude: - "^istio-operator" - "^kube-.*" - "^openshift.*" - "^ibm.*" - "^kiali-operator" MAISTRA-2692 With Mixer removed, custom metrics that have been defined in Service Mesh 2.0.x cannot be used in 2.1. Custom metrics can be configured using EnvoyFilter . Red Hat is unable to support EnvoyFilter configuration except where explicitly documented. This is due to tight coupling with the underlying Envoy APIs, meaning that backward compatibility cannot be maintained. MAISTRA-2648 ServiceMeshExtensions are currently not compatible with meshes deployed on IBM Z Systems. MAISTRA-1959 Migration to 2.0 Prometheus scraping ( spec.addons.prometheus.scrape set to true ) does not work when mTLS is enabled. Additionally, Kiali displays extraneous graph data when mTLS is disabled. This problem can be addressed by excluding port 15020 from proxy configuration, for example, spec: proxy: networking: trafficControl: inbound: excludedPorts: - 15020 MAISTRA-1314 Red Hat OpenShift Service Mesh does not yet support IPv6. MAISTRA-453 If you create a new project and deploy pods immediately, sidecar injection does not occur. The operator fails to add the maistra.io/member-of before the pods are created, therefore the pods must be deleted and recreated for sidecar injection to occur. MAISTRA-158 Applying multiple gateways referencing the same hostname will cause all gateways to stop functioning. 1.2.5.2. Kiali known issues Note New issues for Kiali should be created in the OpenShift Service Mesh project with the Component set to Kiali . These are the known issues in Kiali: KIALI-2206 When you are accessing the Kiali console for the first time, and there is no cached browser data for Kiali, the "View in Grafana" link on the Metrics tab of the Kiali Service Details page redirects to the wrong location. The only way you would encounter this issue is if you are accessing Kiali for the first time. KIALI-507 Kiali does not support Internet Explorer 11. This is because the underlying frameworks do not support Internet Explorer. To access the Kiali console, use one of the two most recent versions of the Chrome, Edge, Firefox or Safari browser. 1.2.5.3. Red Hat OpenShift distributed tracing known issues These limitations exist in Red Hat OpenShift distributed tracing: Apache Spark is not supported. The streaming deployment via AMQ/Kafka is unsupported on IBM Z and IBM Power Systems. These are the known issues for Red Hat OpenShift distributed tracing: OBSDA-220 In some cases, if you try to pull an image using distributed tracing data collection, the image pull fails and a Failed to pull image error message appears. There is no workaround for this issue. TRACING-2057 The Kafka API has been updated to v1beta2 to support the Strimzi Kafka Operator 0.23.0. However, this API version is not supported by AMQ Streams 1.6.3. 
If you have the following environment, your Jaeger services will not be upgraded, and you cannot create new Jaeger services or modify existing Jaeger services: Jaeger Operator channel: 1.17.x stable or 1.20.x stable AMQ Streams Operator channel: amq-streams-1.6.x To resolve this issue, switch the subscription channel for your AMQ Streams Operator to either amq-streams-1.7.x or stable . 1.2.6. Fixed issues The following issues been resolved in the current release: 1.2.6.1. Service Mesh fixed issues OSSM-3025 Istiod sometimes fails to become ready. Sometimes, when a mesh contained many member namespaces, the Istiod pod did not become ready due to a deadlock within Istiod. The deadlock is now resolved and the pod now starts as expected. OSSM-2493 Default nodeSelector and tolerations in SMCP not passed to Kiali. The nodeSelector and tolerations you add to SMCP.spec.runtime.defaults are now passed to the Kiali resource. OSSM-2492 Default tolerations in SMCP not passed to Jaeger. The nodeSelector and tolerations you add to SMCP.spec.runtime.defaults are now passed to the Jaeger resource. OSSM-2374 If you deleted one of the ServiceMeshMember resources, then the Service Mesh operator deleted the ServiceMeshMemberRoll . While this is expected behavior when you delete the last ServiceMeshMember , the operator should not delete the ServiceMeshMemberRoll if it contains any members in addition to the one that was deleted. This issue is now fixed and the operator only deletes the ServiceMeshMemberRoll when the last ServiceMeshMember resource is deleted. OSSM-2373 Error trying to get OAuth metadata when logging in. To fetch the cluster version, the system:anonymous account is used. With the cluster's default bundled ClusterRoles and ClusterRoleBinding, the anonymous account can fetch the version correctly. If the system:anonymous account loses its privileges to fetch the cluster version, OpenShift authentication becomes unusable. This is fixed by using the Kiali SA to fetch the cluster version. This also allows for improved security on the cluster. OSSM-2371 Despite Kiali being configured as "view-only," a user can change the proxy logging level via the Workload details' Logs tab's kebab menu. This issue has been fixed so the options under "Set Proxy Log Level" are disabled when Kiali is configured as "view-only." OSSM-2344 Restarting Istiod causes Kiali to flood CRI-O with port-forward requests. This issue occurred when Kiali could not connect to Istiod and Kiali simultaneously issued a large number of requests to istiod. Kiali now limits the number of requests it sends to istiod. OSSM-2335 Dragging the mouse pointer over the Traces scatterchart plot sometimes caused the Kiali console to stop responding due to concurrent backend requests. OSSM-2053 Using Red Hat OpenShift Service Mesh Operator 2.2 or 2.3, during SMCP reconciliation, the SMMR controller removed the member namespaces from SMMR.status.configuredMembers . This caused the services in the member namespaces to become unavailable for a few moments. Using Red Hat OpenShift Service Mesh Operator 2.2 or 2.3, the SMMR controller no longer removes the namespaces from SMMR.status.configuredMembers . Instead, the controller adds the namespaces to SMMR.status.pendingMembers to indicate that they are not up-to-date. During reconciliation, as each namespace synchronizes with the SMCP, the namespace is automatically removed from SMMR.status.pendingMembers . OSSM-1962 Use EndpointSlices in federation controller. 
The federation controller now uses EndpointSlices , which improves scalability and performance in large deployments. The PILOT_USE_ENDPOINT_SLICE flag is enabled by default. Disabling the flag prevents use of federation deployments. OSSM-1668 A new field spec.security.jwksResolverCA was added to the Version 2.1 SMCP but was missing in the 2.2.0 and 2.2.1 releases. When upgrading from an Operator version where this field was present to an Operator version that was missing this field, the .spec.security.jwksResolverCA field was not available in the SMCP . OSSM-1325 istiod pod crashes and displays the following error message: fatal error: concurrent map iteration and map write . OSSM-1211 Configuring Federated service meshes for failover does not work as expected. The Istiod pilot log displays the following error: envoy connection [C289] TLS error: 337047686:SSL routines:tls_process_server_certificate:certificate verify failed OSSM-1099 The Kiali console displayed the message Sorry, there was a problem. Try a refresh or navigate to a different page. OSSM-1074 Pod annotations defined in SMCP are not injected in the pods. OSSM-999 Kiali retention did not work as expected. Calendar times were greyed out in the dashboard graph. OSSM-797 Kiali Operator pod generates CreateContainerConfigError while installing or updating the operator. OSSM-722 Namespace starting with kube is hidden from Kiali. OSSM-569 There is no CPU memory limit for the Prometheus istio-proxy container. The Prometheus istio-proxy sidecar now uses the resource limits defined in spec.proxy.runtime.container . OSSM-535 Support validationMessages in SMCP. The ValidationMessages field in the Service Mesh Control Plane can now be set to True . This writes a log for the status of the resources, which can be helpful when troubleshooting problems. OSSM-449 VirtualService and Service causes an error "Only unique values for domains are permitted. Duplicate entry of domain." OSSM-419 Namespaces with similar names will all show in Kiali namespace list, even though namespaces may not be defined in Service Mesh Member Role. OSSM-296 When adding health configuration to the Kiali custom resource (CR) is it not being replicated to the Kiali configmap. OSSM-291 In the Kiali console, on the Applications, Services, and Workloads pages, the "Remove Label from Filters" function is not working. OSSM-289 In the Kiali console, on the Service Details pages for the 'istio-ingressgateway' and 'jaeger-query' services there are no Traces being displayed. The traces exist in Jaeger. OSSM-287 In the Kiali console there are no traces being displayed on the Graph Service. OSSM-285 When trying to access the Kiali console, receive the following error message "Error trying to get OAuth Metadata". Workaround: Restart the Kiali pod. MAISTRA-2735 The resources that the Service Mesh Operator deletes when reconciling the SMCP changed in Red Hat OpenShift Service Mesh version 2.1. Previously, the Operator deleted a resource with the following labels: maistra.io/owner app.kubernetes.io/version Now, the Operator ignores resources that does not also include the app.kubernetes.io/managed-by=maistra-istio-operator label. If you create your own resources, you should not add the app.kubernetes.io/managed-by=maistra-istio-operator label to them. MAISTRA-2687 Red Hat OpenShift Service Mesh 2.1 federation gateway does not send the full certificate chain when using external certificates. The Service Mesh federation egress gateway only sends the client certificate. 
Because the federation ingress gateway only knows about the root certificate, it cannot verify the client certificate unless you add the root certificate to the federation import ConfigMap . MAISTRA-2635 Replace deprecated Kubernetes API. To remain compatible with OpenShift Container Platform 4.8, the apiextensions.k8s.io/v1beta1 API was deprecated as of Red Hat OpenShift Service Mesh 2.0.8. MAISTRA-2631 The WASM feature is not working because podman is failing due to nsenter binary not being present. Red Hat OpenShift Service Mesh generates the following error message: Error: error configuring CNI network plugin exec: "nsenter": executable file not found in USDPATH . The container image now contains nsenter and WASM works as expected. MAISTRA-2534 When istiod attempted to fetch the JWKS for an issuer specified in a JWT rule, the issuer service responded with a 502. This prevented the proxy container from becoming ready and caused deployments to hang. The fix for the community bug has been included in the Service Mesh 2.0.7 release. MAISTRA-2411 When the Operator creates a new ingress gateway using spec.gateways.additionaIngress in the ServiceMeshControlPlane , Operator is not creating a NetworkPolicy for the additional ingress gateway like it does for the default istio-ingressgateway. This is causing a 503 response from the route of the new gateway. Workaround: Manually create the NetworkPolicy in the <istio-system> namespace. MAISTRA-2401 CVE-2021-3586 servicemesh-operator: NetworkPolicy resources incorrectly specified ports for ingress resources. The NetworkPolicy resources installed for Red Hat OpenShift Service Mesh did not properly specify which ports could be accessed. This allowed access to all ports on these resources from any pod. Network policies applied to the following resources are affected: Galley Grafana Istiod Jaeger Kiali Prometheus Sidecar injector MAISTRA-2378 When the cluster is configured to use OpenShift SDN with ovs-multitenant and the mesh contains a large number of namespaces (200+), the OpenShift Container Platform networking plugin is unable to configure the namespaces quickly. Service Mesh times out causing namespaces to be continuously dropped from the service mesh and then reenlisted. MAISTRA-2370 Handle tombstones in listerInformer. The updated cache codebase was not handling tombstones when translating the events from the namespace caches to the aggregated cache, leading to a panic in the go routine. MAISTRA-2117 Add optional ConfigMap mount to operator. The CSV now contains an optional ConfigMap volume mount, which mounts the smcp-templates ConfigMap if it exists. If the smcp-templates ConfigMap does not exist, the mounted directory is empty. When you create the ConfigMap , the directory is populated with the entries from the ConfigMap and can be referenced in SMCP.spec.profiles . No restart of the Service Mesh operator is required. Customers using the 2.0 operator with a modified CSV to mount the smcp-templates ConfigMap can upgrade to Red Hat OpenShift Service Mesh 2.1. After upgrading, you can continue using an existing ConfigMap, and the profiles it contains, without editing the CSV. Customers that previously used ConfigMap with a different name will either have to rename the ConfigMap or update the CSV after upgrading. MAISTRA-2010 AuthorizationPolicy does not support request.regex.headers field. 
The validatingwebhook rejects any AuthorizationPolicy with the field, and even if you disable that, Pilot tries to validate it using the same code, and it does not work. MAISTRA-1979 Migration to 2.0 The conversion webhook drops the following important fields when converting SMCP.status from v2 to v1: conditions, components, observedGeneration, and annotations. Upgrading the operator to 2.0 might break client tools that read the SMCP status using the maistra.io/v1 version of the resource. This also causes the READY and STATUS columns to be empty when you run oc get servicemeshcontrolplanes.v1.maistra.io . MAISTRA-1947 Technology Preview Updates to ServiceMeshExtensions are not applied. Workaround: Remove and recreate the ServiceMeshExtensions . MAISTRA-1983 Migration to 2.0 Upgrading to 2.0.0 with an existing invalid ServiceMeshControlPlane cannot easily be repaired. The invalid items in the ServiceMeshControlPlane resource caused an unrecoverable error. The fix makes the errors recoverable. You can delete the invalid resource and replace it with a new one or edit the resource to fix the errors. For more information about editing your resource, see [Configuring the Red Hat OpenShift Service Mesh installation]. MAISTRA-1502 As a result of CVE fixes in version 1.0.10, the Istio dashboards are not available from the Home Dashboard menu in Grafana. To access the Istio dashboards, click the Dashboard menu in the navigation panel and select the Manage tab. MAISTRA-1399 Red Hat OpenShift Service Mesh no longer prevents you from installing unsupported CNI protocols. The supported network configurations have not changed. MAISTRA-1089 Migration to 2.0 Gateways created in a non-control plane namespace are automatically deleted. After removing the gateway definition from the SMCP spec, you need to manually delete these resources. MAISTRA-858 The following Envoy log messages describing deprecated options and configurations associated with Istio 1.1.x are expected:
[2019-06-03 07:03:28.943][19][warning][misc] [external/envoy/source/common/protobuf/utility.cc:129] Using deprecated option 'envoy.api.v2.listener.Filter.config'. This configuration will be removed from Envoy soon.
[2019-08-12 22:12:59.001][13][warning][misc] [external/envoy/source/common/protobuf/utility.cc:174] Using deprecated option 'envoy.api.v2.Listener.use_original_dst' from file lds.proto. This configuration will be removed from Envoy soon.
MAISTRA-806 Evicted Istio Operator Pod causes mesh and CNI not to deploy. Workaround: If the istio-operator pod is evicted while deploying the control plane, delete the evicted istio-operator pod. MAISTRA-681 When the Service Mesh control plane has many namespaces, it can lead to performance issues. MAISTRA-193 Unexpected console info messages are visible when health checking is enabled for Citadel. Bugzilla 1821432 The toggle controls on the OpenShift Container Platform Custom Resource details page do not update the CR correctly. UI toggle controls in the Service Mesh Control Plane (SMCP) Overview page in the OpenShift Container Platform web console sometimes update the wrong field in the resource. To update an SMCP, edit the YAML content directly or update the resource from the command line instead of clicking the toggle controls. 1.2.6.2. Red Hat OpenShift distributed tracing fixed issues OSSM-1910 Because of an issue introduced in version 2.6, TLS connections could not be established with OpenShift Container Platform Service Mesh.
This update resolves the issue by changing the service port names to match conventions used by OpenShift Container Platform Service Mesh and Istio. OBSDA-208 Before this update, the default 200m CPU and 256Mi memory resource limits could cause distributed tracing data collection to restart continuously on large clusters. This update resolves the issue by removing these resource limits. OBSDA-222 Before this update, spans could be dropped in the OpenShift Container Platform distributed tracing platform. To help prevent this issue from occurring, this release updates version dependencies. TRACING-2337 Jaeger is logging a repetitive warning message in the Jaeger logs similar to the following: {"level":"warn","ts":1642438880.918793,"caller":"channelz/logging.go:62","msg":"[core]grpc: Server.Serve failed to create ServerTransport: connection error: desc = \"transport: http2Server.HandleStreams received bogus greeting from client: \\\"\\\\x16\\\\x03\\\\x01\\\\x02\\\\x00\\\\x01\\\\x00\\\\x01\\\\xfc\\\\x03\\\\x03vw\\\\x1a\\\\xc9T\\\\xe7\\\\xdaCj\\\\xb7\\\\x8dK\\\\xa6\\\"\"","system":"grpc","grpc_log":true} This issue was resolved by exposing only the HTTP(S) port of the query service, and not the gRPC port. TRACING-2009 The Jaeger Operator has been updated to include support for the Strimzi Kafka Operator 0.23.0. TRACING-1907 The Jaeger agent sidecar injection was failing due to missing config maps in the application namespace. The config maps were getting automatically deleted due to an incorrect OwnerReference field setting and as a result, the application pods were not moving past the "ContainerCreating" stage. The incorrect settings have been removed. TRACING-1725 Follow-up to TRACING-1631. Additional fix to ensure that Elasticsearch certificates are properly reconciled when there are multiple Jaeger production instances, using same name but within different namespaces. See also BZ-1918920 . TRACING-1631 Multiple Jaeger production instances, using same name but within different namespaces, causing Elasticsearch certificate issue. When multiple service meshes were installed, all of the Jaeger Elasticsearch instances had the same Elasticsearch secret instead of individual secrets, which prevented the OpenShift Elasticsearch Operator from communicating with all of the Elasticsearch clusters. TRACING-1300 Failed connection between Agent and Collector when using Istio sidecar. An update of the Jaeger Operator enabled TLS communication by default between a Jaeger sidecar agent and the Jaeger Collector. TRACING-1208 Authentication "500 Internal Error" when accessing Jaeger UI. When trying to authenticate to the UI using OAuth, I get a 500 error because oauth-proxy sidecar doesn't trust the custom CA bundle defined at installation time with the additionalTrustBundle . TRACING-1166 It is not currently possible to use the Jaeger streaming strategy within a disconnected environment. When a Kafka cluster is being provisioned, it results in a error: Failed to pull image registry.redhat.io/amq7/amq-streams-kafka-24-rhel7@sha256:f9ceca004f1b7dccb3b82d9a8027961f9fe4104e0ed69752c0bdd8078b4a1076 . TRACING-809 Jaeger Ingester is incompatible with Kafka 2.3. When there are two or more instances of the Jaeger Ingester and enough traffic it will continuously generate rebalancing messages in the logs. This is due to a regression in Kafka 2.3 that was fixed in Kafka 2.3.1. For more information, see Jaegertracing-1819 . BZ-1918920 / LOG-1619 The Elasticsearch pods does not get restarted automatically after an update. 
Workaround: Restart the pods manually. 1.3. Understanding Service Mesh Red Hat OpenShift Service Mesh provides a platform for behavioral insight and operational control over your networked microservices in a service mesh. With Red Hat OpenShift Service Mesh, you can connect, secure, and monitor microservices in your OpenShift Container Platform environment. 1.3.1. Understanding service mesh A service mesh is the network of microservices that make up applications in a distributed microservice architecture and the interactions between those microservices. When a Service Mesh grows in size and complexity, it can become harder to understand and manage. Based on the open source Istio project, Red Hat OpenShift Service Mesh adds a transparent layer on existing distributed applications without requiring any changes to the service code. You add Red Hat OpenShift Service Mesh support to services by deploying a special sidecar proxy to relevant services in the mesh that intercepts all network communication between microservices. You configure and manage the Service Mesh using the Service Mesh control plane features. Red Hat OpenShift Service Mesh gives you an easy way to create a network of deployed services that provide: Discovery Load balancing Service-to-service authentication Failure recovery Metrics Monitoring Red Hat OpenShift Service Mesh also provides more complex operational functions including: A/B testing Canary releases Access control End-to-end authentication 1.3.2. Service Mesh architecture Service mesh technology operates at the network communication level. That is, service mesh components capture or intercept traffic to and from microservices, either modifying requests, redirecting them, or creating new requests to other services. At a high level, Red Hat OpenShift Service Mesh consists of a data plane and a control plane. The data plane is a set of intelligent proxies, running alongside application containers in a pod, that intercept and control all inbound and outbound network communication between microservices in the service mesh. The data plane is implemented in such a way that it intercepts all inbound (ingress) and outbound (egress) network traffic. The Istio data plane is composed of Envoy containers running alongside application containers in a pod. The Envoy container acts as a proxy, controlling all network communication into and out of the pod. Envoy proxies are the only Istio components that interact with data plane traffic. All incoming (ingress) and outgoing (egress) network traffic between services flows through the proxies. The Envoy proxy also collects all metrics related to service traffic within the mesh. Envoy proxies are deployed as sidecars, running in the same pod as services. Envoy proxies are also used to implement mesh gateways. Sidecar proxies manage inbound and outbound communication for their workload instance. Gateways are proxies operating as load balancers receiving incoming or outgoing HTTP/TCP connections. Gateway configurations are applied to standalone Envoy proxies that are running at the edge of the mesh, rather than sidecar Envoy proxies running alongside your service workloads. You use a Gateway to manage inbound and outbound traffic for your mesh, letting you specify which traffic you want to enter or leave the mesh. Ingress-gateway - Also known as an ingress controller, the Ingress Gateway is a dedicated Envoy proxy that receives and controls traffic entering the service mesh.
An Ingress Gateway allows features such as monitoring and route rules to be applied to traffic entering the cluster. Egress-gateway - Also known as an egress controller, the Egress Gateway is a dedicated Envoy proxy that manages traffic leaving the service mesh. An Egress Gateway allows features such as monitoring and route rules to be applied to traffic exiting the mesh. The control plane manages and configures the proxies that make up the data plane. It is the authoritative source for configuration, manages access control and usage policies, and collects metrics from the proxies in the service mesh. The Istio control plane is composed of Istiod which consolidates several control plane components (Citadel, Galley, Pilot) into a single binary. Istiod provides service discovery, configuration, and certificate management. It converts high-level routing rules to Envoy configurations and propagates them to the sidecars at runtime. Istiod can act as a Certificate Authority (CA), generating certificates supporting secure mTLS communication in the data plane. You can also use an external CA for this purpose. Istiod is responsible for injecting sidecar proxy containers into workloads deployed to an OpenShift cluster. Red Hat OpenShift Service Mesh uses the istio-operator to manage the installation of the control plane. An Operator is a piece of software that enables you to implement and automate common activities in your OpenShift cluster. It acts as a controller, allowing you to set or change the desired state of objects in your cluster, in this case, a Red Hat OpenShift Service Mesh installation. Red Hat OpenShift Service Mesh also bundles the following Istio add-ons as part of the product: Kiali - Kiali is the management console for Red Hat OpenShift Service Mesh. It provides dashboards, observability, and robust configuration and validation capabilities. It shows the structure of your service mesh by inferring traffic topology and displays the health of your mesh. Kiali provides detailed metrics, powerful validation, access to Grafana, and strong integration with the distributed tracing platform. Prometheus - Red Hat OpenShift Service Mesh uses Prometheus to store telemetry information from services. Kiali depends on Prometheus to obtain metrics, health status, and mesh topology. Jaeger - Red Hat OpenShift Service Mesh supports the distributed tracing platform. Jaeger is an open source traceability server that centralizes and displays traces associated with a single request between multiple services. Using the distributed tracing platform you can monitor and troubleshoot your microservices-based distributed systems. Elasticsearch - Elasticsearch is an open source, distributed, JSON-based search and analytics engine. The distributed tracing platform uses Elasticsearch for persistent storage. Grafana - Grafana provides mesh administrators with advanced query and metrics analysis and dashboards for Istio data. Optionally, Grafana can be used to analyze service mesh metrics. The following Istio integrations are supported with Red Hat OpenShift Service Mesh: 3scale - Istio provides an optional integration with Red Hat 3scale API Management solutions. For versions prior to 2.1, this integration was achieved via the 3scale Istio adapter. For version 2.1 and later, the 3scale integration is achieved via a WebAssembly module. For information about how to install the 3scale adapter, refer to the 3scale Istio adapter documentation 1.3.3. 
Understanding Kiali Kiali provides visibility into your service mesh by showing you the microservices in your service mesh, and how they are connected. 1.3.3.1. Kiali overview Kiali provides observability into the Service Mesh running on OpenShift Container Platform. Kiali helps you define, validate, and observe your Istio service mesh. It helps you to understand the structure of your service mesh by inferring the topology, and also provides information about the health of your service mesh. Kiali provides an interactive graph view of your namespace in real time that provides visibility into features like circuit breakers, request rates, latency, and even graphs of traffic flows. Kiali offers insights about components at different levels, from Applications to Services and Workloads, and can display the interactions with contextual information and charts on the selected graph node or edge. Kiali also provides the ability to validate your Istio configurations, such as gateways, destination rules, virtual services, mesh policies, and more. Kiali provides detailed metrics, and a basic Grafana integration is available for advanced queries. Distributed tracing is provided by integrating Jaeger into the Kiali console. Kiali is installed by default as part of the Red Hat OpenShift Service Mesh. 1.3.3.2. Kiali architecture Kiali is based on the open source Kiali project . Kiali is composed of two components: the Kiali application and the Kiali console. Kiali application (back end) - This component runs in the container application platform and communicates with the service mesh components, retrieves and processes data, and exposes this data to the console. The Kiali application does not need storage. When deploying the application to a cluster, configurations are set in ConfigMaps and secrets. Kiali console (front end) - The Kiali console is a web application. The Kiali application serves the Kiali console, which then queries the back end for data to present it to the user. In addition, Kiali depends on external services and components provided by the container application platform and Istio. Red Hat Service Mesh (Istio) - Istio is a Kiali requirement. Istio is the component that provides and controls the service mesh. Although Kiali and Istio can be installed separately, Kiali depends on Istio and will not work if it is not present. Kiali needs to retrieve Istio data and configurations, which are exposed through Prometheus and the cluster API. Prometheus - A dedicated Prometheus instance is included as part of the Red Hat OpenShift Service Mesh installation. When Istio telemetry is enabled, metrics data are stored in Prometheus. Kiali uses this Prometheus data to determine the mesh topology, display metrics, calculate health, show possible problems, and so on. Kiali communicates directly with Prometheus and assumes the data schema used by Istio Telemetry. Prometheus is an Istio dependency and a hard dependency for Kiali, and many of Kiali's features will not work without Prometheus. Cluster API - Kiali uses the API of the OpenShift Container Platform (cluster API) to fetch and resolve service mesh configurations. Kiali queries the cluster API to retrieve, for example, definitions for namespaces, services, deployments, pods, and other entities. Kiali also makes queries to resolve relationships between the different cluster entities. The cluster API is also queried to retrieve Istio configurations like virtual services, destination rules, route rules, gateways, quotas, and so on. 
Jaeger - Jaeger is optional, but is installed by default as part of the Red Hat OpenShift Service Mesh installation. When you install the distributed tracing platform as part of the default Red Hat OpenShift Service Mesh installation, the Kiali console includes a tab to display distributed tracing data. Note that tracing data will not be available if you disable Istio's distributed tracing feature. Also note that user must have access to the namespace where the Service Mesh control plane is installed to view tracing data. Grafana - Grafana is optional, but is installed by default as part of the Red Hat OpenShift Service Mesh installation. When available, the metrics pages of Kiali display links to direct the user to the same metric in Grafana. Note that user must have access to the namespace where the Service Mesh control plane is installed to view links to the Grafana dashboard and view Grafana data. 1.3.3.3. Kiali features The Kiali console is integrated with Red Hat Service Mesh and provides the following capabilities: Health - Quickly identify issues with applications, services, or workloads. Topology - Visualize how your applications, services, or workloads communicate via the Kiali graph. Metrics - Predefined metrics dashboards let you chart service mesh and application performance for Go, Node.js. Quarkus, Spring Boot, Thorntail and Vert.x. You can also create your own custom dashboards. Tracing - Integration with Jaeger lets you follow the path of a request through various microservices that make up an application. Validations - Perform advanced validations on the most common Istio objects (Destination Rules, Service Entries, Virtual Services, and so on). Configuration - Optional ability to create, update and delete Istio routing configuration using wizards or directly in the YAML editor in the Kiali Console. 1.3.4. Understanding distributed tracing Every time a user takes an action in an application, a request is executed by the architecture that may require dozens of different services to participate to produce a response. The path of this request is a distributed transaction. The distributed tracing platform lets you perform distributed tracing, which follows the path of a request through various microservices that make up an application. Distributed tracing is a technique that is used to tie the information about different units of work together-usually executed in different processes or hosts-to understand a whole chain of events in a distributed transaction. Distributed tracing lets developers visualize call flows in large service oriented architectures. It can be invaluable in understanding serialization, parallelism, and sources of latency. The distributed tracing platform records the execution of individual requests across the whole stack of microservices, and presents them as traces. A trace is a data/execution path through the system. An end-to-end trace comprises one or more spans. A span represents a logical unit of work that has an operation name, the start time of the operation, and the duration. Spans may be nested and ordered to model causal relationships. 1.3.4.1. Distributed tracing overview As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture. You can use distributed tracing for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications. 
With distributed tracing you can perform the following functions: Monitor distributed transactions Optimize performance and latency Perform root cause analysis Red Hat OpenShift distributed tracing consists of two main components: Red Hat OpenShift distributed tracing platform - This component is based on the open source Jaeger project . Red Hat OpenShift distributed tracing data collection - This component is based on the open source OpenTelemetry project . Both of these components are based on the vendor-neutral OpenTracing APIs and instrumentation. 1.3.4.2. Red Hat OpenShift distributed tracing architecture Red Hat OpenShift distributed tracing is made up of several components that work together to collect, store, and display tracing data. Red Hat OpenShift distributed tracing platform - This component is based on the open source Jaeger project . Client (Jaeger client, Tracer, Reporter, instrumented application, client libraries)- The distributed tracing platform clients are language-specific implementations of the OpenTracing API. They can be used to instrument applications for distributed tracing either manually or with a variety of existing open source frameworks, such as Camel (Fuse), Spring Boot (RHOAR), MicroProfile (RHOAR/Thorntail), Wildfly (EAP), and many more, that are already integrated with OpenTracing. Agent (Jaeger agent, Server Queue, Processor Workers) - The distributed tracing platform agent is a network daemon that listens for spans sent over User Datagram Protocol (UDP), which it batches and sends to the Collector. The agent is meant to be placed on the same host as the instrumented application. This is typically accomplished by having a sidecar in container environments such as Kubernetes. Jaeger Collector (Collector, Queue, Workers) - Similar to the Jaeger agent, the Jaeger Collector receives spans and places them in an internal queue for processing. This allows the Jaeger Collector to return immediately to the client/agent instead of waiting for the span to make its way to the storage. Storage (Data Store) - Collectors require a persistent storage backend. Red Hat OpenShift distributed tracing platform has a pluggable mechanism for span storage. Note that for this release, the only supported storage is Elasticsearch. Query (Query Service) - Query is a service that retrieves traces from storage. Ingester (Ingester Service) - Red Hat OpenShift distributed tracing can use Apache Kafka as a buffer between the Collector and the actual Elasticsearch backing storage. Ingester is a service that reads data from Kafka and writes to the Elasticsearch storage backend. Jaeger Console - With the Red Hat OpenShift distributed tracing platform user interface, you can visualize your distributed tracing data. On the Search page, you can find traces and explore details of the spans that make up an individual trace. Red Hat OpenShift distributed tracing data collection - This component is based on the open source OpenTelemetry project . OpenTelemetry Collector - The OpenTelemetry Collector is a vendor-agnostic way to receive, process, and export telemetry data. The OpenTelemetry Collector supports open-source observability data formats, for example, Jaeger and Prometheus, sending to one or more open-source or commercial back-ends. The Collector is the default location instrumentation libraries export their telemetry data. 1.3.4.3. 
Red Hat OpenShift distributed tracing features Red Hat OpenShift distributed tracing provides the following capabilities: Integration with Kiali - When properly configured, you can view distributed tracing data from the Kiali console. High scalability - The distributed tracing back end is designed to have no single points of failure and to scale with the business needs. Distributed Context Propagation - Enables you to connect data from different components together to create a complete end-to-end trace. Backwards compatibility with Zipkin - Red Hat OpenShift distributed tracing has APIs that enable it to be used as a drop-in replacement for Zipkin, but Red Hat is not supporting Zipkin compatibility in this release. 1.3.5. Next steps Prepare to install Red Hat OpenShift Service Mesh in your OpenShift Container Platform environment. 1.4. Service mesh deployment models Red Hat OpenShift Service Mesh supports several different deployment models that can be combined in different ways to best suit your business requirements. 1.4.1. Single mesh deployment model The simplest Istio deployment model is a single mesh. Service names within a mesh must be unique because Kubernetes only allows one service to be named myservice in the mynamespace namespace. However, workload instances can share a common identity since service account names can be shared across workloads in the same namespace. 1.4.2. Single tenancy deployment model In Istio, a tenant is a group of users that share common access and privileges for a set of deployed workloads. You can use tenants to provide a level of isolation between different teams. You can segregate access to different tenants using NetworkPolicies , AuthorizationPolicies , and exportTo annotations on istio.io or service resources. Single tenant, cluster-wide Service Mesh control plane configurations are deprecated as of Red Hat OpenShift Service Mesh version 1.0. Red Hat OpenShift Service Mesh defaults to a multitenant model. 1.4.3. Multitenant deployment model Red Hat OpenShift Service Mesh installs a ServiceMeshControlPlane that is configured for multitenancy by default. Red Hat OpenShift Service Mesh uses a multitenant Operator to manage the Service Mesh control plane lifecycle. Within a mesh, namespaces are used for tenancy. Red Hat OpenShift Service Mesh uses ServiceMeshControlPlane resources to manage mesh installations, whose scope is limited by default to the namespace that contains the resource. You use ServiceMeshMemberRoll and ServiceMeshMember resources to include additional namespaces into the mesh. A namespace can only be included in a single mesh, and multiple meshes can be installed in a single OpenShift cluster. Typical service mesh deployments use a single Service Mesh control plane to configure communication between services in the mesh. Red Hat OpenShift Service Mesh supports "soft multitenancy", where there is one control plane and one mesh per tenant, and there can be multiple independent control planes within the cluster. Multitenant deployments specify the projects that can access the Service Mesh and isolate the Service Mesh from other control plane instances. The cluster administrator gets control and visibility across all the Istio control planes, while the tenant administrator only gets control over their specific Service Mesh, Kiali, and Jaeger instances. You can grant a team permission to deploy its workloads only to a given namespace or set of namespaces.
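As a reference, the following is a minimal sketch of a ServiceMeshMemberRoll that adds two hypothetical application projects to a mesh whose control plane is installed in the istio-system project; the member project names are placeholders used for illustration only.
apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default
  namespace: istio-system
spec:
  members:
  - bookinfo
  - my-app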
If granted the mesh-user role by the service mesh administrator, users can create a ServiceMeshMember resource to add namespaces to the ServiceMeshMemberRoll . 1.4.4. Multimesh or federated deployment model Federation is a deployment model that lets you share services and workloads between separate meshes managed in distinct administrative domains. The Istio multi-cluster model requires a high level of trust between meshes and remote access to all Kubernetes API servers on which the individual meshes reside. Red Hat OpenShift Service Mesh federation takes an opinionated approach to a multi-cluster implementation of Service Mesh that assumes minimal trust between meshes. A federated mesh is a group of meshes behaving as a single mesh. The services in each mesh can be unique services, for example a mesh adding services by importing them from another mesh, can provide additional workloads for the same services across the meshes, providing high availability, or a combination of both. All meshes that are joined into a federated mesh remain managed individually, and you must explicitly configure which services are exported to and imported from other meshes in the federation. Support functions such as certificate generation, metrics and trace collection remain local in their respective meshes. 1.5. Service Mesh and Istio differences Red Hat OpenShift Service Mesh differs from an installation of Istio to provide additional features or to handle differences when deploying on OpenShift Container Platform. 1.5.1. Differences between Istio and Red Hat OpenShift Service Mesh The following features are different in Service Mesh and Istio. 1.5.1.1. Command line tool The command line tool for Red Hat OpenShift Service Mesh is oc . Red Hat OpenShift Service Mesh does not support istioctl . 1.5.1.2. Installation and upgrades Red Hat OpenShift Service Mesh does not support Istio installation profiles. Red Hat OpenShift Service Mesh does not support canary upgrades of the service mesh. 1.5.1.3. Automatic injection The upstream Istio community installation automatically injects the sidecar into pods within the projects you have labeled. Red Hat OpenShift Service Mesh does not automatically inject the sidecar into any pods, but you must opt in to injection using an annotation without labeling projects. This method requires fewer privileges and does not conflict with other OpenShift Container Platform capabilities such as builder pods. To enable automatic injection, specify the sidecar.istio.io/inject label, or annotation, as described in the Automatic sidecar injection section. Table 1.3. Sidecar injection label and annotation settings Upstream Istio Red Hat OpenShift Service Mesh Namespace Label supports "enabled" and "disabled" supports "disabled" Pod Label supports "true" and "false" supports "true" and "false" Pod Annotation supports "false" only supports "true" and "false" 1.5.1.4. Istio Role Based Access Control features Istio Role Based Access Control (RBAC) provides a mechanism you can use to control access to a service. You can identify subjects by user name or by specifying a set of properties and apply access controls accordingly. The upstream Istio community installation includes options to perform exact header matches, match wildcards in headers, or check for a header containing a specific prefix or suffix. Red Hat OpenShift Service Mesh extends the ability to match request headers by using a regular expression. Specify a property key of request.regex.headers with a regular expression. 
Upstream Istio community matching request headers example
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: httpbin-usernamepolicy
spec:
  action: ALLOW
  rules:
  - when:
    - key: 'request.regex.headers[username]'
      values:
      - "allowed.*"
  selector:
    matchLabels:
      app: httpbin
1.5.1.5. OpenSSL
Red Hat OpenShift Service Mesh replaces BoringSSL with OpenSSL. OpenSSL is a software library that contains an open source implementation of the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. The Red Hat OpenShift Service Mesh Proxy binary dynamically links the OpenSSL libraries (libssl and libcrypto) from the underlying Red Hat Enterprise Linux operating system.
1.5.1.6. External workloads
Red Hat OpenShift Service Mesh does not support external workloads, such as virtual machines running outside OpenShift on bare metal servers.
1.5.1.7. Virtual Machine Support
You can deploy virtual machines to OpenShift using OpenShift Virtualization. Then, you can apply a mesh policy, such as mTLS or AuthorizationPolicy, to these virtual machines, just like any other pod that is part of a mesh.
1.5.1.8. Component modifications
A maistra-version label has been added to all resources. All Ingress resources have been converted to OpenShift Route resources. Grafana, distributed tracing (Jaeger), and Kiali are enabled by default and exposed through OpenShift routes. Godebug has been removed from all templates. The istio-multi ServiceAccount and ClusterRoleBinding have been removed, as well as the istio-reader ClusterRole.
1.5.1.9. Envoy filters
Red Hat OpenShift Service Mesh does not support EnvoyFilter configuration except where explicitly documented. Due to tight coupling with the underlying Envoy APIs, backward compatibility cannot be maintained. EnvoyFilter patches are very sensitive to the format of the Envoy configuration that is generated by Istio. If the configuration generated by Istio changes, it has the potential to break the application of the EnvoyFilter .
1.5.1.10. Envoy services
Red Hat OpenShift Service Mesh does not support QUIC-based services.
1.5.1.11. Istio Container Network Interface (CNI) plugin
Red Hat OpenShift Service Mesh includes a CNI plugin, which provides you with an alternate way to configure application pod networking. The CNI plugin replaces the init-container network configuration, eliminating the need to grant service accounts and projects access to security context constraints (SCCs) with elevated privileges.
1.5.1.12. Global mTLS settings
Red Hat OpenShift Service Mesh creates a PeerAuthentication resource that enables or disables Mutual TLS authentication (mTLS) within the mesh.
1.5.1.13. Gateways
Red Hat OpenShift Service Mesh installs ingress and egress gateways by default. You can disable gateway installation in the ServiceMeshControlPlane (SMCP) resource by using the following settings: spec.gateways.enabled=false to disable both ingress and egress gateways. spec.gateways.ingress.enabled=false to disable ingress gateways. spec.gateways.egress.enabled=false to disable egress gateways. Note The Operator annotates the default gateways to indicate that they are generated by and managed by the Red Hat OpenShift Service Mesh Operator.
1.5.1.14. Multicluster configurations
Red Hat OpenShift Service Mesh support for multicluster configurations is limited to the federation of service meshes across multiple clusters.
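As an illustration of the gateway settings listed above, the following ServiceMeshControlPlane snippet keeps the default ingress gateway but disables the egress gateway. This is a minimal sketch rather than a complete control plane configuration; the resource name and namespace match the basic example used later in this document:
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
  namespace: istio-system
spec:
  version: v2.3
  gateways:
    ingress:
      enabled: true
    egress:
      enabled: false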
1.5.1.15. Custom Certificate Signing Requests (CSR)
You cannot configure Red Hat OpenShift Service Mesh to process CSRs through the Kubernetes certificate authority (CA).
1.5.1.16. Routes for Istio Gateways
OpenShift routes for Istio Gateways are automatically managed in Red Hat OpenShift Service Mesh. Every time an Istio Gateway is created, updated or deleted inside the service mesh, an OpenShift route is created, updated or deleted. A Red Hat OpenShift Service Mesh control plane component called Istio OpenShift Routing (IOR) synchronizes the gateway route. For more information, see Automatic route creation.
1.5.1.16.1. Catch-all domains
Catch-all domains ("*") are not supported. If one is found in the Gateway definition, Red Hat OpenShift Service Mesh will create the route, but will rely on OpenShift to create a default hostname. This means that the newly created route will not be a catch all ("*") route, instead it will have a hostname in the form <route-name>[-<project>].<suffix> . See the OpenShift Container Platform documentation for more information about how default hostnames work and how a cluster-admin can customize it. If you use Red Hat OpenShift Dedicated, refer to the Red Hat OpenShift Dedicated documentation about the dedicated-admin role.
1.5.1.16.2. Subdomains
Subdomains (for example, "*.domain.com") are supported. However, this capability is not enabled by default in OpenShift Container Platform. This means that Red Hat OpenShift Service Mesh will create the route with the subdomain, but it will only be in effect if OpenShift Container Platform is configured to enable it.
1.5.1.16.3. Transport layer security
Transport Layer Security (TLS) is supported. This means that, if the Gateway contains a tls section, the OpenShift Route will be configured to support TLS. Additional resources Automatic route creation
1.5.2. Multitenant installations
Whereas upstream Istio takes a single tenant approach, Red Hat OpenShift Service Mesh supports multiple independent control planes within the cluster. Red Hat OpenShift Service Mesh uses a multitenant operator to manage the control plane lifecycle. Red Hat OpenShift Service Mesh installs a multitenant control plane by default. You specify the projects that can access the Service Mesh, and isolate the Service Mesh from other control plane instances.
1.5.2.1. Multitenancy versus cluster-wide installations
The main difference between a multitenant installation and a cluster-wide installation is the scope of privileges used by istiod. The components no longer use the cluster-scoped Role Based Access Control (RBAC) resource ClusterRoleBinding . Every project in the ServiceMeshMemberRoll members list will have a RoleBinding for each service account associated with the control plane deployment, and each control plane deployment will only watch those member projects. Each member project has a maistra.io/member-of label added to it, where the member-of value is the project containing the control plane installation. Red Hat OpenShift Service Mesh configures each member project to ensure network access between itself, the control plane, and other member projects. The exact configuration differs depending on how OpenShift Container Platform software-defined networking (SDN) is configured. See About OpenShift SDN for additional details. If the OpenShift Container Platform cluster is configured to use the SDN plugin: NetworkPolicy : Red Hat OpenShift Service Mesh creates a NetworkPolicy resource in each member project allowing ingress to all pods from the other members and the control plane.
If you remove a member from Service Mesh, this NetworkPolicy resource is deleted from the project. Note This also restricts ingress to only member projects. If you require ingress from non-member projects, you need to create a NetworkPolicy to allow that traffic through. Multitenant : Red Hat OpenShift Service Mesh joins the NetNamespace for each member project to the NetNamespace of the control plane project (the equivalent of running oc adm pod-network join-projects --to control-plane-project member-project ). If you remove a member from the Service Mesh, its NetNamespace is isolated from the control plane (the equivalent of running oc adm pod-network isolate-projects member-project ). Subnet : No additional configuration is performed.
1.5.2.2. Cluster scoped resources
Upstream Istio relies on two cluster-scoped resources: MeshPolicy and ClusterRbacConfig . These are not compatible with a multitenant cluster and have been replaced as described below. ServiceMeshPolicy replaces MeshPolicy for configuration of control-plane-wide authentication policies. This must be created in the same project as the control plane. ServiceMeshRbacConfig replaces ClusterRbacConfig for configuration of control-plane-wide role based access control. This must be created in the same project as the control plane.
1.5.3. Kiali and service mesh
Installing Kiali with the Service Mesh on OpenShift Container Platform differs from community Kiali installations in multiple ways. These modifications are sometimes necessary to resolve issues, provide additional features, or to handle differences when deploying on OpenShift Container Platform. Kiali has been enabled by default. Ingress has been enabled by default. Updates have been made to the Kiali ConfigMap. Updates have been made to the ClusterRole settings for Kiali. Do not edit the ConfigMap, because your changes might be overwritten by the Service Mesh or Kiali Operators. Files that the Kiali Operator manages have a kiali.io/ label or annotation. Updating the Operator files should be restricted to those users with cluster-admin privileges. If you use Red Hat OpenShift Dedicated, updating the Operator files should be restricted to those users with dedicated-admin privileges.
1.5.4. Distributed tracing and service mesh
Installing the distributed tracing platform with the Service Mesh on OpenShift Container Platform differs from community Jaeger installations in multiple ways. These modifications are sometimes necessary to resolve issues, provide additional features, or to handle differences when deploying on OpenShift Container Platform. Distributed tracing has been enabled by default for Service Mesh. Ingress has been enabled by default for Service Mesh. The name of the Zipkin port has changed to jaeger-collector-zipkin (from http ). Jaeger uses Elasticsearch for storage by default when you select either the production or streaming deployment option. The community version of Istio provides a generic "tracing" route. Red Hat OpenShift Service Mesh uses a "jaeger" route that is installed by the Red Hat OpenShift distributed tracing platform Operator and is already protected by OAuth. Red Hat OpenShift Service Mesh uses a sidecar for the Envoy proxy, and Jaeger also uses a sidecar for the Jaeger agent. These two sidecars are configured separately and should not be confused with each other. The proxy sidecar creates spans related to the pod's ingress and egress traffic.
The agent sidecar receives the spans emitted by the application and sends them to the Jaeger Collector.
1.6. Preparing to install Service Mesh
Before you can install Red Hat OpenShift Service Mesh, you must subscribe to OpenShift Container Platform and install OpenShift Container Platform in a supported configuration.
1.6.1. Prerequisites
Maintain an active OpenShift Container Platform subscription on your Red Hat account. If you do not have a subscription, contact your sales representative for more information. Review the OpenShift Container Platform 4.9 overview . Install OpenShift Container Platform 4.9. If you are installing Red Hat OpenShift Service Mesh on a restricted network , follow the instructions for your chosen OpenShift Container Platform infrastructure. Install OpenShift Container Platform 4.9 on AWS Install OpenShift Container Platform 4.9 on user-provisioned AWS Install OpenShift Container Platform 4.9 on bare metal Install OpenShift Container Platform 4.9 on vSphere Install OpenShift Container Platform 4.9 on IBM Z and LinuxONE Install OpenShift Container Platform 4.9 on IBM Power Install the version of the OpenShift Container Platform command line utility (the oc client tool) that matches your OpenShift Container Platform version and add it to your path. If you are using OpenShift Container Platform 4.9, see About the OpenShift CLI . For additional information about Red Hat OpenShift Service Mesh lifecycle and supported platforms, refer to the Support Policy .
1.6.2. Supported configurations
The following configurations are supported for the current release of Red Hat OpenShift Service Mesh.
1.6.2.1. Supported platforms
The Red Hat OpenShift Service Mesh Operator supports multiple versions of the ServiceMeshControlPlane resource. Version 2.3 Service Mesh control planes are supported on the following platform versions: Red Hat OpenShift Container Platform version 4.9 or later. Red Hat OpenShift Dedicated version 4. Azure Red Hat OpenShift (ARO) version 4. Red Hat OpenShift Service on AWS (ROSA).
1.6.2.2. Unsupported configurations
Explicitly unsupported cases include: OpenShift Online is not supported for Red Hat OpenShift Service Mesh. Red Hat OpenShift Service Mesh does not support the management of microservices outside the cluster where Service Mesh is running.
1.6.2.3. Supported network configurations
Red Hat OpenShift Service Mesh supports the following network configurations:
OpenShift-SDN
OVN-Kubernetes is supported on OpenShift Container Platform 4.7.32+, OpenShift Container Platform 4.8.12+, and OpenShift Container Platform 4.9+.
Third-Party Container Network Interface (CNI) plugins that have been certified on OpenShift Container Platform and passed Service Mesh conformance testing. See Certified OpenShift CNI Plug-ins for more information.
1.6.2.4. Supported configurations for Service Mesh
This release of Red Hat OpenShift Service Mesh is only available on OpenShift Container Platform x86_64, IBM Z, and IBM Power Systems. IBM Z is only supported on OpenShift Container Platform 4.6 and later. IBM Power Systems is only supported on OpenShift Container Platform 4.6 and later. Configurations where all Service Mesh components are contained within a single OpenShift Container Platform cluster. Configurations that do not integrate external services such as virtual machines. Red Hat OpenShift Service Mesh does not support EnvoyFilter configuration except where explicitly documented.
1.6.2.5.
Supported configurations for Kiali
The Kiali console is only supported on the two most recent releases of the Chrome, Edge, Firefox, or Safari browsers.
1.6.2.6. Supported configurations for Distributed Tracing
Jaeger agent as a sidecar is the only supported configuration for Jaeger. Jaeger as a daemonset is not supported for multitenant installations or OpenShift Dedicated.
1.6.2.7. Supported WebAssembly module
3scale WebAssembly is the only provided WebAssembly module. You can create custom WebAssembly modules.
1.6.3. Next steps
Install Red Hat OpenShift Service Mesh in your OpenShift Container Platform environment.
1.7. Installing the Operators
To install Red Hat OpenShift Service Mesh, first install the required Operators on OpenShift Container Platform and then create a ServiceMeshControlPlane resource to deploy the control plane. Note This basic installation is configured based on the default OpenShift settings and is not designed for production use. Use this default installation to verify your installation, and then configure your service mesh for your specific environment. Prerequisites Read the Preparing to install Red Hat OpenShift Service Mesh process. An account with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. The following steps show how to install a basic instance of Red Hat OpenShift Service Mesh on OpenShift Container Platform.
1.7.1. Operator overview
Red Hat OpenShift Service Mesh requires the following four Operators: OpenShift Elasticsearch - (Optional) Provides database storage for tracing and logging with the distributed tracing platform. It is based on the open source Elasticsearch project. Red Hat OpenShift distributed tracing platform - Provides distributed tracing to monitor and troubleshoot transactions in complex distributed systems. It is based on the open source Jaeger project. Kiali - Provides observability for your service mesh. Allows you to view configurations, monitor traffic, and analyze traces in a single console. It is based on the open source Kiali project. Red Hat OpenShift Service Mesh - Allows you to connect, secure, control, and observe the microservices that comprise your applications. The Service Mesh Operator defines and monitors the ServiceMeshControlPlane resources that manage the deployment, updating, and deletion of the Service Mesh components. It is based on the open source Istio project. Warning Do not install Community versions of the Operators. Community Operators are not supported.
1.7.2. Installing the Operators
To install Red Hat OpenShift Service Mesh, install the following Operators in this order. Repeat the procedure for each Operator. OpenShift Elasticsearch Red Hat OpenShift distributed tracing platform Kiali Red Hat OpenShift Service Mesh Note If you have already installed the OpenShift Elasticsearch Operator as part of OpenShift Logging, you do not need to install the OpenShift Elasticsearch Operator again. The Red Hat OpenShift distributed tracing platform Operator will create the Elasticsearch instance using the installed OpenShift Elasticsearch Operator. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. In the OpenShift Container Platform web console, click Operators OperatorHub . Type the name of the Operator into the filter box and select the Red Hat version of the Operator.
Community versions of the Operators are not supported. Click Install . On the Install Operator page for each Operator, accept the default settings. Click Install . Wait until the Operator has installed before repeating the steps for the next Operator in the list. The OpenShift Elasticsearch Operator is installed in the openshift-operators-redhat namespace and is available for all namespaces in the cluster. The Red Hat OpenShift distributed tracing platform is installed in the openshift-distributed-tracing namespace and is available for all namespaces in the cluster. The Kiali and Red Hat OpenShift Service Mesh Operators are installed in the openshift-operators namespace and are available for all namespaces in the cluster. After you have installed all four Operators, click Operators Installed Operators to verify that your Operators are installed.
1.7.3. Next steps
The Red Hat OpenShift Service Mesh Operator does not create the various Service Mesh custom resource definitions (CRDs) until you deploy a Service Mesh control plane. You use the ServiceMeshControlPlane resource to install and configure the Service Mesh components. For more information, see Creating the ServiceMeshControlPlane .
1.8. Creating the ServiceMeshControlPlane
You can deploy a basic installation of the ServiceMeshControlPlane (SMCP) by using either the OpenShift Container Platform web console or the command line with the oc client tool. Note This basic installation is configured based on the default OpenShift settings and is not designed for production use. Use this default installation to verify your installation, and then configure your ServiceMeshControlPlane for your environment. Note Red Hat OpenShift Service on AWS (ROSA) places additional restrictions on where you can create resources and as a result the default deployment does not work. See Installing Service Mesh on Red Hat OpenShift Service on AWS for additional requirements before deploying your SMCP in a ROSA environment. Note The Service Mesh documentation uses istio-system as the example project, but you can deploy the service mesh to any project.
1.8.1. Deploying the Service Mesh control plane from the web console
You can deploy a basic ServiceMeshControlPlane by using the web console. In this example, istio-system is the name of the Service Mesh control plane project. Prerequisites The Red Hat OpenShift Service Mesh Operator must be installed. An account with the cluster-admin role. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Create a project named istio-system . Navigate to Home Projects . Click Create Project . In the Name field, enter istio-system . The ServiceMeshControlPlane resource must be installed in a project that is separate from your microservices and Operators. These steps use istio-system as an example, but you can deploy your Service Mesh control plane in any project as long as it is separate from the project that contains your services. Click Create . Navigate to Operators Installed Operators . Click the Red Hat OpenShift Service Mesh Operator, then click Istio Service Mesh Control Plane . On the Istio Service Mesh Control Plane tab, click Create ServiceMeshControlPlane . On the Create ServiceMeshControlPlane page, accept the default Service Mesh control plane version to take advantage of the features available in the most current version of the product.
The version of the control plane determines the features available regardless of the version of the Operator. You can configure ServiceMeshControlPlane settings later. For more information, see Configuring Red Hat OpenShift Service Mesh. Click Create . The Operator creates pods, services, and Service Mesh control plane components based on your configuration parameters. To verify that the control plane installed correctly, click the Istio Service Mesh Control Plane tab. Click the name of the new control plane. Click the Resources tab to see the Red Hat OpenShift Service Mesh control plane resources the Operator created and configured.
1.8.2. Deploying the Service Mesh control plane using the CLI
You can deploy a basic ServiceMeshControlPlane from the command line. Prerequisites The Red Hat OpenShift Service Mesh Operator must be installed. Access to the OpenShift CLI ( oc ). Procedure Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role.
$ oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443
Create a project named istio-system .
$ oc new-project istio-system
Create a ServiceMeshControlPlane file named istio-installation.yaml using the following example. The version of the Service Mesh control plane determines the features available regardless of the version of the Operator.
Example version 2.3 istio-installation.yaml
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
  namespace: istio-system
spec:
  version: v2.3
  tracing:
    type: Jaeger
    sampling: 10000
  addons:
    jaeger:
      name: jaeger
      install:
        storage:
          type: Memory
    kiali:
      enabled: true
      name: kiali
    grafana:
      enabled: true
Run the following command to deploy the Service Mesh control plane, where <istio_installation.yaml> includes the full path to your file.
$ oc create -n istio-system -f <istio_installation.yaml>
To watch the progress of the pod deployment, run the following command:
$ oc get pods -n istio-system -w
You should see output similar to the following:
NAME                                   READY   STATUS    RESTARTS   AGE
grafana-b4d59bd7-mrgbr                 2/2     Running   0          65m
istio-egressgateway-678dc97b4c-wrjkp   1/1     Running   0          108s
istio-ingressgateway-b45c9d54d-4qg6n   1/1     Running   0          108s
istiod-basic-55d78bbbcd-j5556          1/1     Running   0          108s
jaeger-67c75bd6dc-jv6k6                2/2     Running   0          65m
kiali-6476c7656c-x5msp                 1/1     Running   0          43m
prometheus-58954b8d6b-m5std            2/2     Running   0          66m
1.8.3. Validating your SMCP installation with the CLI
You can validate the creation of the ServiceMeshControlPlane from the command line. Procedure Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role.
$ oc login https://<HOSTNAME>:6443
Run the following command to verify the Service Mesh control plane installation, where istio-system is the namespace where you installed the Service Mesh control plane.
$ oc get smcp -n istio-system
The installation has finished successfully when the STATUS column is ComponentsReady .
NAME    READY   STATUS            PROFILES      VERSION   AGE
basic   10/10   ComponentsReady   ["default"]   2.1.1     66m
1.8.4. Validating your SMCP installation with Kiali
You can use the Kiali console to validate your Service Mesh installation. The Kiali console offers several ways to validate your Service Mesh components are deployed and configured properly. Procedure Log in to the OpenShift Container Platform web console as a user with cluster-admin rights.
If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Navigate to Networking Routes . On the Routes page, select the Service Mesh control plane project, for example istio-system , from the Namespace menu. The Location column displays the linked address for each route. If necessary, use the filter to find the route for the Kiali console. Click the route Location to launch the console. Click Log In With OpenShift . When you first log in to the Kiali Console, you see the Overview page, which displays all the namespaces in your service mesh that you have permission to view. When there are multiple namespaces shown on the Overview page, Kiali shows namespaces with health or validation problems first. Figure 1.1. Kiali Overview page The tile for each namespace displays the number of labels, the Istio Config health, the number of applications and their health, and Traffic for the namespace. If you are validating the console installation and namespaces have not yet been added to the mesh, there might not be any data to display other than istio-system . Kiali has four dashboards specifically for the namespace where the Service Mesh control plane is installed. To view these dashboards, click the Options menu on the tile for the control plane namespace, for example, istio-system , and select one of the following options: Istio Mesh Dashboard Istio Control Plane Dashboard Istio Performance Dashboard Istio Wasm Extension Dashboard Figure 1.2. Grafana Istio Control Plane Dashboard Kiali also installs two additional Grafana dashboards, available from the Grafana Home page: Istio Workload Dashboard Istio Service Dashboard To view the Service Mesh control plane nodes, click the Graph page, select the Namespace where you installed the ServiceMeshControlPlane from the menu, for example istio-system . If necessary, click Display idle nodes . To learn more about the Graph page, click the Graph tour link. To view the mesh topology, select one or more additional namespaces from the Service Mesh Member Roll from the Namespace menu. To view the list of applications in the istio-system namespace, click the Applications page. Kiali displays the health of the applications. Hover your mouse over the information icon to view any additional information noted in the Details column. To view the list of workloads in the istio-system namespace, click the Workloads page. Kiali displays the health of the workloads. Hover your mouse over the information icon to view any additional information noted in the Details column. To view the list of services in the istio-system namespace, click the Services page. Kiali displays the health of the services and of the configurations. Hover your mouse over the information icon to view any additional information noted in the Details column. To view a list of the Istio Configuration objects in the istio-system namespace, click the Istio Config page. Kiali displays the health of the configuration. If there are configuration errors, click the row and Kiali opens the configuration file with the error highlighted.
1.8.5. Installing on Red Hat OpenShift Service on AWS (ROSA)
Starting with version 2.2, Red Hat OpenShift Service Mesh supports installation on Red Hat OpenShift Service on AWS (ROSA). This section documents the additional requirements when installing Service Mesh on this platform.
1.8.5.1. Installation location
You must create a new namespace, for example istio-system , when installing Red Hat OpenShift Service Mesh and creating the ServiceMeshControlPlane .
1.8.5.2. Required Service Mesh control plane configuration
The default configuration in the ServiceMeshControlPlane file does not work on a ROSA cluster. You must modify the default SMCP and set spec.security.identity.type=ThirdParty when installing on Red Hat OpenShift Service on AWS.
Example ServiceMeshControlPlane resource for ROSA
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
  namespace: istio-system
spec:
  version: v2.3
  security:
    identity:
      type: ThirdParty #required setting for ROSA
  tracing:
    type: Jaeger
    sampling: 10000
  policy:
    type: Istiod
  addons:
    grafana:
      enabled: true
    jaeger:
      install:
        storage:
          type: Memory
    kiali:
      enabled: true
    prometheus:
      enabled: true
  telemetry:
    type: Istiod
1.8.5.3. Restrictions on Kiali configuration
Red Hat OpenShift Service on AWS places additional restrictions on where you can create resources and does not let you create the Kiali resource in a Red Hat managed namespace. This means that the following common settings for spec.deployment.accessible_namespaces are not allowed in a ROSA cluster: ['**'] (all namespaces) default codeready-* openshift-* redhat-* The validation error message provides a complete list of all the restricted namespaces.
Example Kiali resource for ROSA
apiVersion: kiali.io/v1alpha1
kind: Kiali
metadata:
  name: kiali
  namespace: istio-system
spec:
  auth:
    strategy: openshift
  deployment:
    accessible_namespaces: #restricted setting for ROSA
      - istio-system
    image_pull_policy: ''
    ingress_enabled: true
    namespace: istio-system
1.8.6. Additional resources
Red Hat OpenShift Service Mesh supports multiple independent control planes within the cluster. You can create reusable configurations with ServiceMeshControlPlane profiles. For more information, see Creating control plane profiles .
1.8.7. Next steps
Create a ServiceMeshMemberRoll resource to specify the namespaces associated with the Service Mesh. For more information, see Adding services to a service mesh .
1.9. Adding services to a service mesh
After installing the Operators and ServiceMeshControlPlane resource, add applications, workloads, or services to your mesh by creating a ServiceMeshMemberRoll resource and specifying the namespaces where your content is located. If you already have an application, workload, or service to add to a ServiceMeshMemberRoll resource, use the following steps. Or, to install a sample application called Bookinfo and add it to a ServiceMeshMemberRoll resource, skip to the tutorial for installing the Bookinfo example application to see how an application works in Red Hat OpenShift Service Mesh. The items listed in the ServiceMeshMemberRoll resource are the applications and workloads that are managed by the ServiceMeshControlPlane resource. The control plane, which includes the Service Mesh Operators, Istiod, and ServiceMeshControlPlane , and the data plane, which includes applications and Envoy proxy, must be in separate namespaces. Note After you add the namespace to the ServiceMeshMemberRoll , services or pods in that namespace will not be accessible to callers outside the service mesh.
1.9.1. Creating the Red Hat OpenShift Service Mesh member roll
The ServiceMeshMemberRoll lists the projects that belong to the Service Mesh control plane. Only projects listed in the ServiceMeshMemberRoll are affected by the control plane.
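To confirm which projects the control plane currently recognizes, you can inspect the default member roll directly. This is a quick sketch, assuming the control plane project is istio-system :
$ oc get smmr default -n istio-system -o yaml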
A project does not belong to a service mesh until you add it to the member roll for a particular control plane deployment. You must create a ServiceMeshMemberRoll resource named default in the same project as the ServiceMeshControlPlane , for example istio-system .
1.9.1.1. Creating the member roll from the web console
You can add one or more projects to the Service Mesh member roll from the web console. In this example, istio-system is the name of the Service Mesh control plane project. Prerequisites An installed, verified Red Hat OpenShift Service Mesh Operator. List of existing projects to add to the service mesh. Procedure Log in to the OpenShift Container Platform web console. If you do not already have services for your mesh, or you are starting from scratch, create a project for your applications. It must be different from the project where you installed the Service Mesh control plane. Navigate to Home Projects . Enter a name in the Name field. Click Create . Navigate to Operators Installed Operators . Click the Project menu and choose the project where your ServiceMeshControlPlane resource is deployed from the list, for example istio-system . Click the Red Hat OpenShift Service Mesh Operator. Click the Istio Service Mesh Member Roll tab. Click Create ServiceMeshMemberRoll . Click Members , then enter the name of your project in the Value field. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource. Click Create .
1.9.1.2. Creating the member roll from the CLI
You can add a project to the ServiceMeshMemberRoll from the command line. Prerequisites An installed, verified Red Hat OpenShift Service Mesh Operator. List of projects to add to the service mesh. Access to the OpenShift CLI ( oc ). Procedure Log in to the OpenShift Container Platform CLI.
$ oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443
If you do not already have services for your mesh, or you are starting from scratch, create a project for your applications. It must be different from the project where you installed the Service Mesh control plane.
$ oc new-project <your-project>
To add your projects as members, modify the following example YAML. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource. In this example, istio-system is the name of the Service Mesh control plane project.
Example servicemeshmemberroll-default.yaml
apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default
  namespace: istio-system
spec:
  members:
    # a list of projects joined into the service mesh
    - your-project-name
    - another-project-name
Run the following command to upload and create the ServiceMeshMemberRoll resource in the istio-system namespace.
$ oc create -n istio-system -f servicemeshmemberroll-default.yaml
Run the following command to verify the ServiceMeshMemberRoll was created successfully.
$ oc get smmr -n istio-system default
The installation has finished successfully when the STATUS column is Configured .
1.9.2. Adding or removing projects from the service mesh
You can add or remove projects from an existing Service Mesh ServiceMeshMemberRoll resource using the web console. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource. The ServiceMeshMemberRoll resource is deleted when its corresponding ServiceMeshControlPlane resource is deleted.
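As an alternative to editing the member roll interactively, the members list can also be replaced in a single step with a merge patch. This is a sketch: the project names are placeholders, and note that a merge patch replaces the entire members list rather than appending to it:
$ oc patch smmr default -n istio-system --type merge -p '{"spec":{"members":["your-project-name","another-project-name"]}}'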
1.9.2.1. Adding or removing projects from the member roll using the web console
Prerequisites An installed, verified Red Hat OpenShift Service Mesh Operator. An existing ServiceMeshMemberRoll resource. Name of the project with the ServiceMeshMemberRoll resource. Names of the projects you want to add or remove from the mesh. Procedure Log in to the OpenShift Container Platform web console. Navigate to Operators Installed Operators . Click the Project menu and choose the project where your ServiceMeshControlPlane resource is deployed from the list, for example istio-system . Click the Red Hat OpenShift Service Mesh Operator. Click the Istio Service Mesh Member Roll tab. Click the default link. Click the YAML tab. Modify the YAML to add or remove projects as members. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource. Click Save . Click Reload .
1.9.2.2. Adding or removing projects from the member roll using the CLI
You can modify an existing Service Mesh member roll using the command line. Prerequisites An installed, verified Red Hat OpenShift Service Mesh Operator. An existing ServiceMeshMemberRoll resource. Name of the project with the ServiceMeshMemberRoll resource. Names of the projects you want to add or remove from the mesh. Access to the OpenShift CLI ( oc ). Procedure Log in to the OpenShift Container Platform CLI. Edit the ServiceMeshMemberRoll resource.
$ oc edit smmr -n <controlplane-namespace>
Modify the YAML to add or remove projects as members. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource.
Example servicemeshmemberroll-default.yaml
apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default
  namespace: istio-system #control plane project
spec:
  members:
    # a list of projects joined into the service mesh
    - your-project-name
    - another-project-name
1.9.3. Bookinfo example application
The Bookinfo example application allows you to test your Red Hat OpenShift Service Mesh 2.3.2 installation on OpenShift Container Platform. The Bookinfo application displays information about a book, similar to a single catalog entry of an online book store. The application displays a page that describes the book, book details (ISBN, number of pages, and other information), and book reviews. The Bookinfo application consists of these microservices: The productpage microservice calls the details and reviews microservices to populate the page. The details microservice contains book information. The reviews microservice contains book reviews. It also calls the ratings microservice. The ratings microservice contains book ranking information that accompanies a book review. There are three versions of the reviews microservice: Version v1 does not call the ratings Service. Version v2 calls the ratings Service and displays each rating as one to five black stars. Version v3 calls the ratings Service and displays each rating as one to five red stars.
1.9.3.1. Installing the Bookinfo application
This tutorial walks you through how to create a sample application by creating a project, deploying the Bookinfo application to that project, and viewing the running application in Service Mesh. Prerequisites: OpenShift Container Platform 4.1 or higher installed. Red Hat OpenShift Service Mesh 2.3.2 installed. Access to the OpenShift CLI ( oc ). An account with the cluster-admin role. Note The Bookinfo sample application cannot be installed on IBM Z and IBM Power Systems.
Note The commands in this section assume the Service Mesh control plane project is istio-system . If you installed the control plane in another namespace, edit each command before you run it. Procedure Log in to the OpenShift Container Platform web console as a user with cluster-admin rights. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Click Home Projects . Click Create Project . Enter bookinfo as the Project Name , enter a Display Name , and enter a Description , then click Create . Alternatively, you can run this command from the CLI to create the bookinfo project.
$ oc new-project bookinfo
Click Operators Installed Operators . Click the Project menu and use the Service Mesh control plane namespace. In this example, use istio-system . Click the Red Hat OpenShift Service Mesh Operator. Click the Istio Service Mesh Member Roll tab. If you have already created an Istio Service Mesh Member Roll, click the name, then click the YAML tab to open the YAML editor. If you have not created a ServiceMeshMemberRoll , click Create ServiceMeshMemberRoll . Click Members , then enter the name of your project in the Value field. Click Create to save the updated Service Mesh Member Roll. Or, save the following example to a YAML file.
Bookinfo ServiceMeshMemberRoll example servicemeshmemberroll-default.yaml
apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default
spec:
  members:
    - bookinfo
Run the following command to upload that file and create the ServiceMeshMemberRoll resource in the istio-system namespace. In this example, istio-system is the name of the Service Mesh control plane project.
$ oc create -n istio-system -f servicemeshmemberroll-default.yaml
Run the following command to verify the ServiceMeshMemberRoll was created successfully.
$ oc get smmr -n istio-system -o wide
The installation has finished successfully when the STATUS column is Configured .
NAME      READY   STATUS       AGE   MEMBERS
default   1/1     Configured   70s   ["bookinfo"]
From the CLI, deploy the Bookinfo application in the bookinfo project by applying the bookinfo.yaml file:
$ oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.3/samples/bookinfo/platform/kube/bookinfo.yaml
You should see output similar to the following:
service/details created
serviceaccount/bookinfo-details created
deployment.apps/details-v1 created
service/ratings created
serviceaccount/bookinfo-ratings created
deployment.apps/ratings-v1 created
service/reviews created
serviceaccount/bookinfo-reviews created
deployment.apps/reviews-v1 created
deployment.apps/reviews-v2 created
deployment.apps/reviews-v3 created
service/productpage created
serviceaccount/bookinfo-productpage created
deployment.apps/productpage-v1 created
Create the ingress gateway by applying the bookinfo-gateway.yaml file:
$ oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.3/samples/bookinfo/networking/bookinfo-gateway.yaml
You should see output similar to the following:
gateway.networking.istio.io/bookinfo-gateway created
virtualservice.networking.istio.io/bookinfo created
Set the value for the GATEWAY_URL parameter:
$ export GATEWAY_URL=$(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')
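To confirm that the variable resolved to a routable host, you can echo the resulting product page URL; this is the same check that the verification procedure uses later. A small sketch, assuming the export above succeeded:
$ echo "http://$GATEWAY_URL/productpage"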
1.9.3.2. Adding default destination rules
Before you can use the Bookinfo application, you must first add default destination rules. There are two preconfigured YAML files, depending on whether or not you enabled mutual transport layer security (TLS) authentication. Procedure To add destination rules, run one of the following commands: If you did not enable mutual TLS:
$ oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.3/samples/bookinfo/networking/destination-rule-all.yaml
If you enabled mutual TLS:
$ oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.3/samples/bookinfo/networking/destination-rule-all-mtls.yaml
You should see output similar to the following:
destinationrule.networking.istio.io/productpage created
destinationrule.networking.istio.io/reviews created
destinationrule.networking.istio.io/ratings created
destinationrule.networking.istio.io/details created
1.9.3.3. Verifying the Bookinfo installation
To confirm that the sample Bookinfo application was successfully deployed, perform the following steps. Prerequisites Red Hat OpenShift Service Mesh installed. Complete the steps for installing the Bookinfo sample app. Procedure from CLI Log in to the OpenShift Container Platform CLI. Verify that all pods are ready with this command:
$ oc get pods -n bookinfo
All pods should have a status of Running . You should see output similar to the following:
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-55b869668-jh7hb        2/2     Running   0          12m
productpage-v1-6fc77ff794-nsl8r   2/2     Running   0          12m
ratings-v1-7d7d8d8b56-55scn       2/2     Running   0          12m
reviews-v1-868597db96-bdxgq       2/2     Running   0          12m
reviews-v2-5b64f47978-cvssp       2/2     Running   0          12m
reviews-v3-6dfd49b55b-vcwpf       2/2     Running   0          12m
Run the following command to retrieve the URL for the product page:
echo "http://$GATEWAY_URL/productpage"
Copy and paste the output in a web browser to verify the Bookinfo product page is deployed. Procedure from Kiali web console Obtain the address for the Kiali web console. Log in to the OpenShift Container Platform web console as a user with cluster-admin rights. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Navigate to Networking Routes . On the Routes page, select the Service Mesh control plane project, for example istio-system , from the Namespace menu. The Location column displays the linked address for each route. Click the link in the Location column for Kiali. Click Log In With OpenShift . The Kiali Overview screen presents tiles for each project namespace. In Kiali, click Graph . Select bookinfo from the Namespace list, and App graph from the Graph Type list. Click Display idle nodes from the Display menu. This displays nodes that are defined but have not received or sent requests. It can confirm that an application is properly defined, but that no request traffic has been reported. Use the Duration menu to increase the time period to help ensure older traffic is captured. Use the Refresh Rate menu to refresh traffic more or less often, or not at all. Click Services , Workloads , or Istio Config to see list views of bookinfo components, and confirm that they are healthy.
1.9.3.4. Removing the Bookinfo application
Follow these steps to remove the Bookinfo application. Prerequisites OpenShift Container Platform 4.1 or higher installed. Red Hat OpenShift Service Mesh 2.3.2 installed. Access to the OpenShift CLI ( oc ).
1.9.3.4.1. Delete the Bookinfo project
Procedure Log in to the OpenShift Container Platform web console. Click Home Projects . Click the bookinfo menu , and then click Delete Project .
Type bookinfo in the confirmation dialog box, and then click Delete . Alternatively, you can run this command using the CLI to delete the bookinfo project.
$ oc delete project bookinfo
1.9.3.4.2. Remove the Bookinfo project from the Service Mesh member roll
Procedure Log in to the OpenShift Container Platform web console. Click Operators Installed Operators . Click the Project menu and choose istio-system from the list. Click the Istio Service Mesh Member Roll link under Provided APIs for the Red Hat OpenShift Service Mesh Operator. Click the ServiceMeshMemberRoll menu and select Edit Service Mesh Member Roll . Edit the default Service Mesh Member Roll YAML and remove bookinfo from the members list. Alternatively, you can run this command using the CLI to remove the bookinfo project from the ServiceMeshMemberRoll . In this example, istio-system is the name of the Service Mesh control plane project.
$ oc -n istio-system patch --type='json' smmr default -p '[{"op": "remove", "path": "/spec/members", "value":["'"bookinfo"'"]}]'
Click Save to update the Service Mesh Member Roll.
1.9.4. Next steps
To continue the installation process, you must enable sidecar injection .
1.10. Enabling sidecar injection
After adding the namespaces that contain your services to your mesh, the next step is to enable automatic sidecar injection in the Deployment resource for your application. You must enable automatic sidecar injection for each deployment. If you have installed the Bookinfo sample application, the application was deployed and the sidecars were injected as part of the installation procedure. If you are using your own project and service, deploy your applications on OpenShift Container Platform. For more information, see the OpenShift Container Platform documentation, Understanding Deployment and DeploymentConfig objects .
1.10.1. Prerequisites
Services deployed to the mesh , for example the Bookinfo sample application. A Deployment resource file.
1.10.2. Enabling automatic sidecar injection
When deploying an application, you must opt in to injection by configuring the annotation sidecar.istio.io/inject in spec.template.metadata.annotations to true in the deployment object. Opting in ensures that the sidecar injection does not interfere with other OpenShift Container Platform features such as builder pods used by numerous frameworks within the OpenShift Container Platform ecosystem. Prerequisites Identify the namespaces that are part of your service mesh and the deployments that need automatic sidecar injection. Procedure To find your deployments, use the oc get command.
$ oc get deployment -n <namespace>
For example, to view the deployment file for the 'ratings-v1' microservice in the bookinfo namespace, use the following command to see the resource in YAML format.
$ oc get deployment -n bookinfo ratings-v1 -o yaml
Open the application's deployment configuration YAML file in an editor. Add the sidecar.istio.io/inject annotation under spec.template.metadata.annotations in your Deployment YAML and set its value to 'true' , as shown in the following example.
Example snippet from bookinfo deployment-ratings-v1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ratings-v1
  namespace: bookinfo
  labels:
    app: ratings
    version: v1
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: 'true'
Save the Deployment configuration file. Add the file back to the project that contains your app.
$ oc apply -n <namespace> -f deployment.yaml
In this example, bookinfo is the name of the project that contains the ratings-v1 app and deployment-ratings-v1.yaml is the file you edited.
$ oc apply -n bookinfo -f deployment-ratings-v1.yaml
To verify that the resource uploaded successfully, run the following command.
$ oc get deployment -n <namespace> <deploymentName> -o yaml
For example,
$ oc get deployment -n bookinfo ratings-v1 -o yaml
1.10.3. Validating sidecar injection
The Kiali console offers several ways to validate whether or not your applications, services, and workloads have a sidecar proxy. Figure 1.3. Missing sidecar badge The Graph page displays a node badge indicating a Missing Sidecar on the following graphs: App graph Versioned app graph Workload graph Figure 1.4. Missing sidecar icon The Applications page displays a Missing Sidecar icon in the Details column for any applications in a namespace that do not have a sidecar. The Workloads page displays a Missing Sidecar icon in the Details column for any applications in a namespace that do not have a sidecar. The Services page displays a Missing Sidecar icon in the Details column for any applications in a namespace that do not have a sidecar. When there are multiple versions of a service, you use the Service Details page to view Missing Sidecar icons. The Workload Details page has a special unified Logs tab that lets you view and correlate application and proxy logs. You can view the Envoy logs as another way to validate sidecar injection for your application workloads. The Workload Details page also has an Envoy tab for any workload that is an Envoy proxy or has been injected with an Envoy proxy. This tab displays a built-in Envoy dashboard that includes subtabs for Clusters , Listeners , Routes , Bootstrap , Config , and Metrics . For information about enabling Envoy access logs, see the Troubleshooting section. For information about viewing Envoy logs, see Viewing logs in the Kiali console.
1.10.4. Setting proxy environment variables through annotations
Configuration for the Envoy sidecar proxies is managed by the ServiceMeshControlPlane . You can set environment variables for the sidecar proxy for applications by adding pod annotations to the deployment in the injection-template.yaml file. The environment variables are injected into the sidecar.
Example injection-template.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: resource
spec:
  replicas: 7
  selector:
    matchLabels:
      app: resource
  template:
    metadata:
      annotations:
        sidecar.maistra.io/proxyEnv: "{ \"maistra_test_env\": \"env_value\", \"maistra_test_env_2\": \"env_value_2\" }"
Warning You should never include maistra.io/ labels and annotations when creating your own custom resources. These labels and annotations indicate that the resources are generated and managed by the Operator. If you are copying content from an Operator-generated resource when creating your own resources, do not include labels or annotations that start with maistra.io/ . Resources that include these labels or annotations will be overwritten or deleted by the Operator during the reconciliation.
1.10.5. Updating sidecar proxies
To update the configuration for sidecar proxies, the application administrator must restart the application pods. If your deployment uses automatic sidecar injection, you can update the pod template in the deployment by adding or modifying an annotation.
Run the following command to redeploy the pods:
$ oc patch deployment/<deployment> -p '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt": "'`date -Iseconds`'"}}}}}'
If your deployment does not use automatic sidecar injection, you must manually update the sidecars by modifying the sidecar container image specified in the deployment or pod, and then restart the pods.
1.10.6. Next steps
Configure Red Hat OpenShift Service Mesh features for your environment. Security Traffic management Metrics, logs, and traces
1.11. Upgrading Service Mesh
To access the most current features of Red Hat OpenShift Service Mesh, upgrade to the current version, 2.3.2.
1.11.1. Understanding versioning
Red Hat uses semantic versioning for product releases. Semantic Versioning is a 3-component number in the format of X.Y.Z, where: X stands for a Major version. Major releases usually denote some sort of breaking change: architectural changes, API changes, schema changes, and similar major updates. Y stands for a Minor version. Minor releases contain new features and functionality while maintaining backwards compatibility. Z stands for a Patch version (also known as a z-stream release). Patch releases are used to address Common Vulnerabilities and Exposures (CVEs) and release bug fixes. New features and functionality are generally not released as part of a Patch release.
1.11.1.1. How versioning affects Service Mesh upgrades
Depending on the version of the update you are making, the upgrade process is different. Patch updates - Patch upgrades are managed by the Operator Lifecycle Manager (OLM); they happen automatically when you update your Operators. Minor upgrades - Minor upgrades require both updating to the most recent Red Hat OpenShift Service Mesh Operator version and manually modifying the spec.version value in your ServiceMeshControlPlane resources. Major upgrades - Major upgrades require both updating to the most recent Red Hat OpenShift Service Mesh Operator version and manually modifying the spec.version value in your ServiceMeshControlPlane resources. Because major upgrades can contain changes that are not backwards compatible, additional manual changes might be required.
1.11.1.2. Understanding Service Mesh versions
In order to understand what version of Red Hat OpenShift Service Mesh you have deployed on your system, you need to understand how each of the component versions is managed. Operator version - The most current Operator version is 2.3.2. The Operator version number only indicates the version of the currently installed Operator. Because the Red Hat OpenShift Service Mesh Operator supports multiple versions of the Service Mesh control plane, the version of the Operator does not determine the version of your deployed ServiceMeshControlPlane resources. Important Upgrading to the latest Operator version automatically applies patch updates, but does not automatically upgrade your Service Mesh control plane to the latest minor version. ServiceMeshControlPlane version - The ServiceMeshControlPlane version determines what version of Red Hat OpenShift Service Mesh you are using. The value of the spec.version field in the ServiceMeshControlPlane resource controls the architecture and configuration settings that are used to install and deploy Red Hat OpenShift Service Mesh. When you create the Service Mesh control plane, you can set the version in one of two ways: To configure in the Form View, select the version from the Control Plane Version menu.
To configure in the YAML View, set the value for spec.version in the YAML file. Operator Lifecycle Manager (OLM) does not manage Service Mesh control plane upgrades, so the version number for your Operator and ServiceMeshControlPlane (SMCP) may not match, unless you have manually upgraded your SMCP.
1.11.2. Upgrade considerations
The maistra.io/ label or annotation should not be used on a user-created custom resource, because it indicates that the resource was generated by and should be managed by the Red Hat OpenShift Service Mesh Operator. Warning During the upgrade, the Operator makes changes, including deleting or replacing files, to resources that include the following labels or annotations that indicate that the resource is managed by the Operator. Before upgrading, check for user-created custom resources that include the following labels or annotations: maistra.io/ AND the app.kubernetes.io/managed-by label set to maistra-istio-operator (Red Hat OpenShift Service Mesh) kiali.io/ (Kiali) jaegertracing.io/ (Red Hat OpenShift distributed tracing platform) logging.openshift.io/ (Red Hat Elasticsearch) Before upgrading, check your user-created custom resources for labels or annotations that indicate they are Operator managed. Remove the label or annotation from custom resources that you do not want to be managed by the Operator. When upgrading to version 2.0, the Operator only deletes resources with these labels in the same namespace as the SMCP. When upgrading to version 2.1, the Operator deletes resources with these labels in all namespaces.
1.11.2.1. Known issues that may affect upgrade
Known issues that may affect your upgrade include: Red Hat OpenShift Service Mesh does not support the use of EnvoyFilter configuration except where explicitly documented. This is due to tight coupling with the underlying Envoy APIs, meaning that backward compatibility cannot be maintained. If you are using Envoy Filters, and the configuration generated by Istio has changed due to the latest version of Envoy introduced by upgrading your ServiceMeshControlPlane , that has the potential to break any EnvoyFilter you may have implemented. OSSM-1505 ServiceMeshExtension does not work with OpenShift Container Platform version 4.11. Because ServiceMeshExtension has been deprecated in Red Hat OpenShift Service Mesh 2.2, this known issue will not be fixed and you must migrate your extensions to WasmPlugin . OSSM-1396 If a gateway resource contains the spec.externalIPs setting, rather than being recreated when the ServiceMeshControlPlane is updated, the gateway is removed and never recreated. OSSM-1052 When configuring a Service ExternalIP for the ingressgateway in the Service Mesh control plane, the service is not created. The schema for the SMCP is missing the parameter for the service. Workaround: Disable the gateway creation in the SMCP spec and manage the gateway deployment entirely manually (including Service, Role and RoleBinding).
1.11.3. Upgrading the Operators
In order to keep your Service Mesh patched with the latest security fixes, bug fixes, and software updates, you must keep your Operators updated. You initiate patch updates by upgrading your Operators. Important The version of the Operator does not determine the version of your service mesh. The version of your deployed Service Mesh control plane determines your version of Service Mesh.
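One way to compare the two versions on a live cluster is to list the installed Operator alongside the deployed control plane. This is a sketch; the namespaces shown are the defaults used elsewhere in this document, and the exact CSV name may differ on your cluster:
$ oc get csv -n openshift-operators | grep servicemeshoperator
$ oc get smcp -n istio-system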
Because the Red Hat OpenShift Service Mesh Operator supports multiple versions of the Service Mesh control plane, updating the Red Hat OpenShift Service Mesh Operator does not update the spec.version value of your deployed ServiceMeshControlPlane . Also note that the spec.version value is a two digit number, for example 2.2, and that patch updates, for example 2.2.1, are not reflected in the SMCP version value. Operator Lifecycle Manager (OLM) controls the installation, upgrade, and role-based access control (RBAC) of Operators in a cluster. The OLM runs by default in OpenShift Container Platform. OLM queries for available Operators as well as upgrades for installed Operators. Whether or not you have to take action to upgrade your Operators depends on the settings you selected when installing them. When you installed each of your Operators, you selected an Update Channel and an Approval Strategy . The combination of these two settings determine when and how your Operators are updated. Table 1.4. Interaction of Update Channel and Approval Strategy Versioned channel "Stable" or "Preview" Channel Automatic Automatically updates the Operator for minor and patch releases for that version only. Will not automatically update to the major version (that is, from version 2.0 to 3.0). Manual change to Operator subscription required to update to the major version. Automatically updates Operator for all major, minor, and patch releases. Manual Manual updates required for minor and patch releases for the specified version. Manual change to Operator subscription required to update to the major version. Manual updates required for all major, minor, and patch releases. When you update your Red Hat OpenShift Service Mesh Operator the Operator Lifecycle Manager (OLM) removes the old Operator pod and starts a new pod. Once the new Operator pod starts, the reconciliation process checks the ServiceMeshControlPlane (SMCP), and if there are updated container images available for any of the Service Mesh control plane components, it replaces those Service Mesh control plane pods with ones that use the new container images. When you upgrade the Kiali and Red Hat OpenShift distributed tracing platform Operators, the OLM reconciliation process scans the cluster and upgrades the managed instances to the version of the new Operator. For example, if you update the Red Hat OpenShift distributed tracing platform Operator from version 1.30.2 to version 1.34.1, the Operator scans for running instances of distributed tracing platform and upgrades them to 1.34.1 as well. To stay on a particular patch version of Red Hat OpenShift Service Mesh, you would need to disable automatic updates and remain on that specific version of the Operator. For more information about upgrading Operators, refer to the Operator Lifecycle Manager documentation. 1.11.4. Upgrading the control plane You must manually update the control plane for minor and major releases. The community Istio project recommends canary upgrades, Red Hat OpenShift Service Mesh only supports in-place upgrades. Red Hat OpenShift Service Mesh requires that you upgrade from each minor release to the minor release in sequence. For example, you must upgrade from version 2.0 to version 2.1, and then upgrade to version 2.2. You cannot update from Red Hat OpenShift Service Mesh 2.0 to 2.2 directly. When you upgrade the service mesh control plane, all Operator managed resources, for example gateways, are also upgraded. 
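Because minor versions must be applied in sequence, a control plane that is several releases behind is upgraded by stepping spec.version one minor version at a time and letting the Operator finish reconciling between steps. The following is a hedged sketch only; it assumes an SMCP named basic in the istio-system project and that your ServiceMeshControlPlane reports a Ready condition.

# Step from v2.1 to v2.2, wait for reconciliation, then step to v2.3
oc -n istio-system patch smcp/basic --type merge -p '{"spec":{"version":"v2.2"}}'
oc -n istio-system wait --for condition=Ready smcp/basic --timeout=600s

oc -n istio-system patch smcp/basic --type merge -p '{"spec":{"version":"v2.3"}}'
oc -n istio-system wait --for condition=Ready smcp/basic --timeout=600s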
Although you can deploy multiple versions of the control plane in the same cluster, Red Hat OpenShift Service Mesh does not support canary upgrades of the service mesh. That is, you can have different SMCP resources with different values for spec.version , but they cannot manage the same mesh. For more information about migrating your extensions, refer to Migrating from ServiceMeshExtension to WasmPlugin resources . 1.11.4.1. Upgrade changes from version 2.2 to version 2.3 Upgrading the Service Mesh control plane from version 2.2 to 2.3 introduces the following behavioral changes: This release requires use of the WasmPlugin API. Support for the ServiceMeshExtension API, which was deprecated in 2.2, has now been removed. If you attempt to upgrade while using the ServiceMeshExtension API, then the upgrade fails. 1.11.4.2. Upgrade changes from version 2.1 to version 2.2 Upgrading the Service Mesh control plane from version 2.1 to 2.2 introduces the following behavioral changes: The istio-node DaemonSet is renamed to istio-cni-node to match the name in upstream Istio. Istio 1.10 updated Envoy to send traffic to the application container using eth0 rather than lo by default. This release adds support for the WasmPlugin API and deprecates the ServiceMeshExtension API. 1.11.4.3. Upgrade changes from version 2.0 to version 2.1 Upgrading the Service Mesh control plane from version 2.0 to 2.1 introduces the following architectural and behavioral changes. Architecture changes Mixer has been completely removed in Red Hat OpenShift Service Mesh 2.1. Upgrading from a Red Hat OpenShift Service Mesh 2.0.x release to 2.1 will be blocked if Mixer is enabled. If you see the following message when upgrading from v2.0 to v2.1, update the existing Mixer type to Istiod type in the existing Control Plane spec before you update the .spec.version field: An error occurred admission webhook smcp.validation.maistra.io denied the request: [support for policy.type "Mixer" and policy.Mixer options have been removed in v2.1, please use another alternative, support for telemetry.type "Mixer" and telemetry.Mixer options have been removed in v2.1, please use another alternative] For example: apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: policy: type: Istiod telemetry: type: Istiod version: v2.3 Behavioral changes AuthorizationPolicy updates: With the PROXY protocol, if you're using ipBlocks and notIpBlocks to specify remote IP addresses, update the configuration to use remoteIpBlocks and notRemoteIpBlocks instead. Added support for nested JSON Web Token (JWT) claims. EnvoyFilter breaking changes: you must use typed_config ; xDS v2 is no longer supported; legacy filter names are deprecated. Older versions of proxies may report 503 status codes when receiving 1xx or 204 status codes from newer proxies. 1.11.4.4. Upgrading the Service Mesh control plane To upgrade Red Hat OpenShift Service Mesh, you must update the version field of the Red Hat OpenShift Service Mesh ServiceMeshControlPlane v2 resource. Then, once it is configured and applied, restart the application pods to update each sidecar proxy and its configuration. Prerequisites You are running OpenShift Container Platform 4.9 or later. You have the latest Red Hat OpenShift Service Mesh Operator. Procedure Switch to the project that contains your ServiceMeshControlPlane resource. In this example, istio-system is the name of the Service Mesh control plane project.
USD oc project istio-system Check your v2 ServiceMeshControlPlane resource configuration to verify it is valid. Run the following command to view your ServiceMeshControlPlane resource as a v2 resource. USD oc get smcp -o yaml Tip Back up your Service Mesh control plane configuration. Update the .spec.version field and apply the configuration. For example: apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.3 Alternatively, instead of using the command line, you can use the web console to edit the Service Mesh control plane. In the OpenShift Container Platform web console, click Project and select the project name you just entered. Click Operators Installed Operators . Find your ServiceMeshControlPlane instance. Select YAML view and update text of the YAML file, as shown in the example. Click Save . 1.11.4.5. Migrating Red Hat OpenShift Service Mesh from version 1.1 to version 2.0 Upgrading from version 1.1 to 2.0 requires manual steps that migrate your workloads and application to a new instance of Red Hat OpenShift Service Mesh running the new version. Prerequisites You must upgrade to OpenShift Container Platform 4.7. before you upgrade to Red Hat OpenShift Service Mesh 2.0. You must have Red Hat OpenShift Service Mesh version 2.0 operator. If you selected the automatic upgrade path, the operator automatically downloads the latest information. However, there are steps you must take to use the features in Red Hat OpenShift Service Mesh version 2.0. 1.11.4.5.1. Upgrading Red Hat OpenShift Service Mesh To upgrade Red Hat OpenShift Service Mesh, you must create an instance of Red Hat OpenShift Service Mesh ServiceMeshControlPlane v2 resource in a new namespace. Then, once it's configured, move your microservice applications and workloads from your old mesh to the new service mesh. Procedure Check your v1 ServiceMeshControlPlane resource configuration to make sure it is valid. Run the following command to view your ServiceMeshControlPlane resource as a v2 resource. USD oc get smcp -o yaml Check the spec.techPreview.errored.message field in the output for information about any invalid fields. If there are invalid fields in your v1 resource, the resource is not reconciled and cannot be edited as a v2 resource. All updates to v2 fields will be overridden by the original v1 settings. To fix the invalid fields, you can replace, patch, or edit the v1 version of the resource. You can also delete the resource without fixing it. After the resource has been fixed, it can be reconciled, and you can to modify or view the v2 version of the resource. To fix the resource by editing a file, use oc get to retrieve the resource, edit the text file locally, and replace the resource with the file you edited. USD oc get smcp.v1.maistra.io <smcp_name> > smcp-resource.yaml #Edit the smcp-resource.yaml file. USD oc replace -f smcp-resource.yaml To fix the resource using patching, use oc patch . USD oc patch smcp.v1.maistra.io <smcp_name> --type json --patch '[{"op": "replace","path":"/spec/path/to/bad/setting","value":"corrected-value"}]' To fix the resource by editing with command line tools, use oc edit . USD oc edit smcp.v1.maistra.io <smcp_name> Back up your Service Mesh control plane configuration. Switch to the project that contains your ServiceMeshControlPlane resource. In this example, istio-system is the name of the Service Mesh control plane project. USD oc project istio-system Enter the following command to retrieve the current configuration. 
Your <smcp_name> is specified in the metadata of your ServiceMeshControlPlane resource, for example basic-install or full-install . USD oc get servicemeshcontrolplanes.v1.maistra.io <smcp_name> -o yaml > <smcp_name>.v1.yaml Convert your ServiceMeshControlPlane to a v2 control plane version that contains information about your configuration as a starting point. USD oc get smcp <smcp_name> -o yaml > <smcp_name>.v2.yaml Create a project. In the OpenShift Container Platform console Project menu, click New Project and enter a name for your project, istio-system-upgrade , for example. Or, you can run this command from the CLI. USD oc new-project istio-system-upgrade Update the metadata.namespace field in your v2 ServiceMeshControlPlane with your new project name. In this example, use istio-system-upgrade . Update the version field from 1.1 to 2.0 or remove it in your v2 ServiceMeshControlPlane . Create a ServiceMeshControlPlane in the new namespace. On the command line, run the following command to deploy the control plane with the v2 version of the ServiceMeshControlPlane that you retrieved. In this example, replace `<smcp_name.v2> `with the path to your file. USD oc create -n istio-system-upgrade -f <smcp_name>.v2.yaml Alternatively, you can use the console to create the Service Mesh control plane. In the OpenShift Container Platform web console, click Project . Then, select the project name you just entered. Click Operators Installed Operators . Click Create ServiceMeshControlPlane . Select YAML view and paste text of the YAML file you retrieved into the field. Check that the apiVersion field is set to maistra.io/v2 and modify the metadata.namespace field to use the new namespace, for example istio-system-upgrade . Click Create . 1.11.4.5.2. Configuring the 2.0 ServiceMeshControlPlane The ServiceMeshControlPlane resource has been changed for Red Hat OpenShift Service Mesh version 2.0. After you created a v2 version of the ServiceMeshControlPlane resource, modify it to take advantage of the new features and to fit your deployment. Consider the following changes to the specification and behavior of Red Hat OpenShift Service Mesh 2.0 as you're modifying your ServiceMeshControlPlane resource. You can also refer to the Red Hat OpenShift Service Mesh 2.0 product documentation for new information to features you use. The v2 resource must be used for Red Hat OpenShift Service Mesh 2.0 installations. 1.11.4.5.2.1. Architecture changes The architectural units used by versions have been replaced by Istiod. In 2.0 the Service Mesh control plane components Mixer, Pilot, Citadel, Galley, and the sidecar injector functionality have been combined into a single component, Istiod. Although Mixer is no longer supported as a control plane component, Mixer policy and telemetry plugins are now supported through WASM extensions in Istiod. Mixer can be enabled for policy and telemetry if you need to integrate legacy Mixer plugins. Secret Discovery Service (SDS) is used to distribute certificates and keys to sidecars directly from Istiod. In Red Hat OpenShift Service Mesh version 1.1, secrets were generated by Citadel, which were used by the proxies to retrieve their client certificates and keys. 1.11.4.5.2.2. Annotation changes The following annotations are no longer supported in v2.0. If you are using one of these annotations, you must update your workload before moving it to a v2.0 Service Mesh control plane. sidecar.maistra.io/proxyCPULimit has been replaced with sidecar.istio.io/proxyCPULimit . 
If you were using sidecar.maistra.io annotations on your workloads, you must modify those workloads to use sidecar.istio.io equivalents instead. sidecar.maistra.io/proxyMemoryLimit has been replaced with sidecar.istio.io/proxyMemoryLimit . sidecar.istio.io/discoveryAddress is no longer supported. Also, the default discovery address has moved from pilot.<control_plane_namespace>.svc:15010 (or port 15011, if mtls is enabled) to istiod-<smcp_name>.<control_plane_namespace>.svc:15012 . The health status port is no longer configurable and is hard-coded to 15021. If you were defining a custom status port, for example, status.sidecar.istio.io/port , you must remove the override before moving the workload to a v2.0 Service Mesh control plane. Readiness checks can still be disabled by setting the status port to 0 . Kubernetes Secret resources are no longer used to distribute client certificates for sidecars. Certificates are now distributed through Istiod's SDS service. If you were relying on mounted secrets, they are no longer available for workloads in v2.0 Service Mesh control planes. 1.11.4.5.2.3. Behavioral changes Some features in Red Hat OpenShift Service Mesh 2.0 work differently than they did in previous versions. The readiness port on gateways has moved from 15020 to 15021 . The target host visibility includes VirtualService, as well as ServiceEntry resources. It includes any restrictions applied through Sidecar resources. Automatic mutual TLS is enabled by default. Proxy to proxy communication is automatically configured to use mTLS, regardless of global PeerAuthentication policies in place. Secure connections are always used when proxies communicate with the Service Mesh control plane regardless of the spec.security.controlPlane.mtls setting. The spec.security.controlPlane.mtls setting is only used when configuring connections for Mixer telemetry or policy. 1.11.4.5.2.4. Migration details for unsupported resources Policy (authentication.istio.io/v1alpha1) Policy resources must be migrated to new resource types for use with v2.0 Service Mesh control planes, PeerAuthentication and RequestAuthentication. Depending on the specific configuration in your Policy resource, you may have to configure multiple resources to achieve the same effect. Mutual TLS Mutual TLS enforcement is accomplished using the security.istio.io/v1beta1 PeerAuthentication resource. The legacy spec.peers.mtls.mode field maps directly to the new resource's spec.mtls.mode field. Selection criteria have changed from specifying a service name in spec.targets[x].name to a label selector in spec.selector.matchLabels . In PeerAuthentication, the labels must match the selector on the services named in the targets list. Any port-specific settings must be mapped into spec.portLevelMtls . Authentication Additional authentication methods specified in spec.origins must be mapped into a security.istio.io/v1beta1 RequestAuthentication resource. spec.selector.matchLabels must be configured similarly to the same field on PeerAuthentication. Configuration specific to JWT principals from spec.origins.jwt items maps to similar fields in spec.rules items. spec.origins[x].jwt.triggerRules specified in the Policy must be mapped into one or more security.istio.io/v1beta1 AuthorizationPolicy resources. Any spec.selector.labels must be configured similarly to the same field on RequestAuthentication.
spec.origins[x].jwt.triggerRules.excludedPaths must be mapped into an AuthorizationPolicy whose spec.action is set to ALLOW, with spec.rules[x].to.operation.path entries matching the excluded paths. spec.origins[x].jwt.triggerRules.includedPaths must be mapped into a separate AuthorizationPolicy whose spec.action is set to ALLOW , with spec.rules[x].to.operation.path entries matching the included paths, and spec.rules.[x].from.source.requestPrincipals entries that align with the specified spec.origins[x].jwt.issuer in the Policy resource. ServiceMeshPolicy (maistra.io/v1) ServiceMeshPolicy was configured automatically for the Service Mesh control plane through the spec.istio.global.mtls.enabled setting in the v1 resource or the spec.security.dataPlane.mtls setting in the v2 resource. For v2 control planes, a functionally equivalent PeerAuthentication resource is created during installation. This feature is deprecated in Red Hat OpenShift Service Mesh version 2.0. RbacConfig, ServiceRole, ServiceRoleBinding (rbac.istio.io/v1alpha1) These resources were replaced by the security.istio.io/v1beta1 AuthorizationPolicy resource. Mimicking RbacConfig behavior requires writing a default AuthorizationPolicy whose settings depend on the spec.mode specified in the RbacConfig. When spec.mode is set to OFF , no resource is required as the default policy is ALLOW, unless an AuthorizationPolicy applies to the request. When spec.mode is set to ON , set spec: {} . You must create AuthorizationPolicy policies for all services in the mesh. When spec.mode is set to ON_WITH_INCLUSION , you must create an AuthorizationPolicy with spec: {} in each included namespace. Inclusion of individual services is not supported by AuthorizationPolicy. However, as soon as any AuthorizationPolicy is created that applies to the workloads for the service, all other requests not explicitly allowed will be denied. When spec.mode is set to ON_WITH_EXCLUSION , it is not supported by AuthorizationPolicy. A global DENY policy can be created, but an AuthorizationPolicy must be created for every workload in the mesh because there is no allow-all policy that can be applied to either a namespace or a workload. AuthorizationPolicy includes configuration for both the selector to which the configuration applies, which is similar to the function ServiceRoleBinding provides, and the rules which should be applied, which is similar to the function ServiceRole provides. ServiceMeshRbacConfig (maistra.io/v1) This resource is replaced by using a security.istio.io/v1beta1 AuthorizationPolicy resource with an empty spec.selector in the Service Mesh control plane's namespace. This policy will be the default authorization policy applied to all workloads in the mesh. For specific migration details, see RbacConfig above. 1.11.4.5.2.5. Mixer plugins Mixer components are disabled by default in version 2.0. If you rely on Mixer plugins for your workload, you must configure your version 2.0 ServiceMeshControlPlane to include the Mixer components. To enable the Mixer policy components, add the following snippet to your ServiceMeshControlPlane . spec: policy: type: Mixer To enable the Mixer telemetry components, add the following snippet to your ServiceMeshControlPlane . spec: telemetry: type: Mixer Legacy Mixer plugins can also be migrated to WASM and integrated using the new ServiceMeshExtension (maistra.io/v1alpha1) custom resource. Built-in WASM filters included in the upstream Istio distribution are not available in Red Hat OpenShift Service Mesh 2.0. 1.11.4.5.2.6.
Mutual TLS changes When using mTLS with workload specific PeerAuthentication policies, a corresponding DestinationRule is required to allow traffic if the workload policy differs from the namespace/global policy. Auto mTLS is enabled by default, but can be disabled by setting spec.security.dataPlane.automtls to false in the ServiceMeshControlPlane resource. When disabling auto mTLS, DestinationRules may be required for proper communication between services. For example, setting PeerAuthentication to STRICT for one namespace may prevent services in other namespaces from accessing them, unless a DestinationRule configures TLS mode for the services in the namespace. For information about mTLS, see Enabling mutual Transport Layer Security (mTLS) 1.11.4.5.2.6.1. Other mTLS Examples To disable mTLS For productpage service in the bookinfo sample application, your Policy resource was configured the following way for Red Hat OpenShift Service Mesh v1.1. Example Policy resource apiVersion: authentication.istio.io/v1alpha1 kind: Policy metadata: name: productpage-mTLS-disable namespace: <namespace> spec: targets: - name: productpage To disable mTLS For productpage service in the bookinfo sample application, use the following example to configure your PeerAuthentication resource for Red Hat OpenShift Service Mesh v2.0. Example PeerAuthentication resource apiVersion: security.istio.io/v1beta1 kind: PeerAuthentication metadata: name: productpage-mTLS-disable namespace: <namespace> spec: mtls: mode: DISABLE selector: matchLabels: # this should match the selector for the "productpage" service app: productpage To enable mTLS With JWT authentication for the productpage service in the bookinfo sample application, your Policy resource was configured the following way for Red Hat OpenShift Service Mesh v1.1. Example Policy resource apiVersion: authentication.istio.io/v1alpha1 kind: Policy metadata: name: productpage-mTLS-with-JWT namespace: <namespace> spec: targets: - name: productpage ports: - number: 9000 peers: - mtls: origins: - jwt: issuer: "https://securetoken.google.com" audiences: - "productpage" jwksUri: "https://www.googleapis.com/oauth2/v1/certs" jwtHeaders: - "x-goog-iap-jwt-assertion" triggerRules: - excludedPaths: - exact: /health_check principalBinding: USE_ORIGIN To enable mTLS With JWT authentication for the productpage service in the bookinfo sample application, use the following example to configure your PeerAuthentication resource for Red Hat OpenShift Service Mesh v2.0. 
Example PeerAuthentication resource #require mtls for productpage:9000 apiVersion: security.istio.io/v1beta1 kind: PeerAuthentication metadata: name: productpage-mTLS-with-JWT namespace: <namespace> spec: selector: matchLabels: # this should match the selector for the "productpage" service app: productpage portLevelMtls: 9000: mode: STRICT --- #JWT authentication for productpage apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: productpage-mTLS-with-JWT namespace: <namespace> spec: selector: matchLabels: # this should match the selector for the "productpage" service app: productpage jwtRules: - issuer: "https://securetoken.google.com" audiences: - "productpage" jwksUri: "https://www.googleapis.com/oauth2/v1/certs" fromHeaders: - name: "x-goog-iap-jwt-assertion" --- #Require JWT token to access product page service from #any client to all paths except /health_check apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: productpage-mTLS-with-JWT namespace: <namespace> spec: action: ALLOW selector: matchLabels: # this should match the selector for the "productpage" service app: productpage rules: - to: # require JWT token to access all other paths - operation: notPaths: - /health_check from: - source: # if using principalBinding: USE_PEER in the Policy, # then use principals, e.g. # principals: # - "*" requestPrincipals: - "*" - to: # no JWT token required to access health_check - operation: paths: - /health_check 1.11.4.5.3. Configuration recipes You can configure the following items with these configuration recipes. 1.11.4.5.3.1. Mutual TLS in a data plane Mutual TLS for data plane communication is configured through spec.security.dataPlane.mtls in the ServiceMeshControlPlane resource, which is false by default. 1.11.4.5.3.2. Custom signing key Istiod manages client certificates and private keys used by service proxies. By default, Istiod uses a self-signed certificate for signing, but you can configure a custom certificate and private key. For more information about how to configure signing keys, see Adding an external certificate authority key and certificate 1.11.4.5.3.3. Tracing Tracing is configured in spec.tracing . Currently, the only type of tracer that is supported is Jaeger . Sampling is a scaled integer representing 0.01% increments, for example, 1 is 0.01% and 10000 is 100%. The tracing implementation and sampling rate can be specified: spec: tracing: sampling: 100 # 1% type: Jaeger Jaeger is configured in the addons section of the ServiceMeshControlPlane resource. spec: addons: jaeger: name: jaeger install: storage: type: Memory # or Elasticsearch for production mode memory: maxTraces: 100000 elasticsearch: # the following values only apply if storage:type:=Elasticsearch storage: # specific storageclass configuration for the Jaeger Elasticsearch (optional) size: "100G" storageClassName: "storageclass" nodeCount: 3 redundancyPolicy: SingleRedundancy runtime: components: tracing.jaeger: {} # general Jaeger specific runtime configuration (optional) tracing.jaeger.elasticsearch: #runtime configuration for Jaeger Elasticsearch deployment (optional) container: resources: requests: memory: "1Gi" cpu: "500m" limits: memory: "1Gi" The Jaeger installation can be customized with the install field. Container configuration, such as resource limits is configured in spec.runtime.components.jaeger related fields. 
If a Jaeger resource matching the value of spec.addons.jaeger.name exists, the Service Mesh control plane will be configured to use the existing installation. Use an existing Jaeger resource to fully customize your Jaeger installation. 1.11.4.5.3.4. Visualization Kiali and Grafana are configured under the addons section of the ServiceMeshControlPlane resource. spec: addons: grafana: enabled: true install: {} # customize install kiali: enabled: true name: kiali install: {} # customize install The Grafana and Kiali installations can be customized through their respective install fields. Container customization, such as resource limits, is configured in spec.runtime.components.kiali and spec.runtime.components.grafana . If an existing Kiali resource matching the value of name exists, the Service Mesh control plane configures the Kiali resource for use with the control plane. Some fields in the Kiali resource are overridden, such as the accessible_namespaces list, as well as the endpoints for Grafana, Prometheus, and tracing. Use an existing resource to fully customize your Kiali installation. 1.11.4.5.3.5. Resource utilization and scheduling Resources are configured under spec.runtime.<component> . The following component names are supported. Component Description Versions supported security Citadel container v1.0/1.1 galley Galley container v1.0/1.1 pilot Pilot/Istiod container v1.0/1.1/2.0 mixer istio-telemetry and istio-policy containers v1.0/1.1 mixer.policy istio-policy container v2.0 mixer.telemetry istio-telemetry container v2.0 global.ouathproxy oauth-proxy container used with various addons v1.0/1.1/2.0 sidecarInjectorWebhook sidecar injector webhook container v1.0/1.1 tracing.jaeger general Jaeger container - not all settings may be applied. Complete customization of Jaeger installation is supported by specifying an existing Jaeger resource in the Service Mesh control plane configuration. v1.0/1.1/2.0 tracing.jaeger.agent settings specific to Jaeger agent v1.0/1.1/2.0 tracing.jaeger.allInOne settings specific to Jaeger allInOne v1.0/1.1/2.0 tracing.jaeger.collector settings specific to Jaeger collector v1.0/1.1/2.0 tracing.jaeger.elasticsearch settings specific to Jaeger elasticsearch deployment v1.0/1.1/2.0 tracing.jaeger.query settings specific to Jaeger query v1.0/1.1/2.0 prometheus prometheus container v1.0/1.1/2.0 kiali Kiali container - complete customization of Kiali installation is supported by specifying an existing Kiali resource in the Service Mesh control plane configuration. v1.0/1.1/2.0 grafana Grafana container v1.0/1.1/2.0 3scale 3scale container v1.0/1.1/2.0 wasmExtensions.cacher WASM extensions cacher container v2.0 - tech preview Some components support resource limiting and scheduling. For more information, see Performance and scalability . 1.11.4.5.4. steps for migrating your applications and workloads Move the application workload to the new mesh and remove the old instances to complete your upgrade. 1.11.5. Upgrading the data plane Your data plane will still function after you have upgraded the control plane. But in order to apply updates to the Envoy proxy and any changes to the proxy configuration, you must restart your application pods and workloads. 1.11.5.1. Updating your applications and workloads To complete the migration, restart all of the application pods in the mesh to upgrade the Envoy sidecar proxies and their configuration. 
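After the rolling restarts described next, you can confirm that each workload is running the new sidecar image by listing the istio-proxy container image for every pod. This is a hedged sketch; bookinfo is only an example of a mesh member namespace.

# List each pod and the istio-proxy sidecar image it is running
oc get pods -n bookinfo \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[?(@.name=="istio-proxy")].image}{"\n"}{end}'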
To perform a rolling update of a deployment use the following command: USD oc rollout restart <deployment> You must perform a rolling update for all applications that make up the mesh. 1.12. Managing users and profiles 1.12.1. Creating the Red Hat OpenShift Service Mesh members ServiceMeshMember resources provide a way for Red Hat OpenShift Service Mesh administrators to delegate permissions to add projects to a service mesh, even when the respective users don't have direct access to the service mesh project or member roll. While project administrators are automatically given permission to create the ServiceMeshMember resource in their project, they cannot point it to any ServiceMeshControlPlane until the service mesh administrator explicitly grants access to the service mesh. Administrators can grant users permissions to access the mesh by granting them the mesh-user user role. In this example, istio-system is the name of the Service Mesh control plane project. USD oc policy add-role-to-user -n istio-system --role-namespace istio-system mesh-user <user_name> Administrators can modify the mesh-user role binding in the Service Mesh control plane project to specify the users and groups that are granted access. The ServiceMeshMember adds the project to the ServiceMeshMemberRoll within the Service Mesh control plane project that it references. apiVersion: maistra.io/v1 kind: ServiceMeshMember metadata: name: default spec: controlPlaneRef: namespace: istio-system name: basic The mesh-users role binding is created automatically after the administrator creates the ServiceMeshControlPlane resource. An administrator can use the following command to add a role to a user. USD oc policy add-role-to-user The administrator can also create the mesh-user role binding before the administrator creates the ServiceMeshControlPlane resource. For example, the administrator can create it in the same oc apply operation as the ServiceMeshControlPlane resource. This example adds a role binding for alice : apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: namespace: istio-system name: mesh-users roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: mesh-user subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: alice 1.12.2. Creating Service Mesh control plane profiles You can create reusable configurations with ServiceMeshControlPlane profiles. Individual users can extend the profiles they create with their own configurations. Profiles can also inherit configuration information from other profiles. For example, you can create an accounting control plane for the accounting team and a marketing control plane for the marketing team. If you create a development template and a production template, members of the marketing team and the accounting team can extend the development and production profiles with team-specific customization. When you configure Service Mesh control plane profiles, which follow the same syntax as the ServiceMeshControlPlane , users inherit settings in a hierarchical fashion. The Operator is delivered with a default profile with default settings for Red Hat OpenShift Service Mesh. 1.12.2.1. Creating the ConfigMap To add custom profiles, you must create a ConfigMap named smcp-templates in the openshift-operators project. The Operator container automatically mounts the ConfigMap . Prerequisites An installed, verified Service Mesh Operator. An account with the cluster-admin role. 
If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Location of the Operator deployment. Access to the OpenShift CLI ( oc ). Procedure Log in to the OpenShift Container Platform CLI as a cluster-admin . If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. From the CLI, run this command to create the ConfigMap named smcp-templates in the openshift-operators project and replace <profiles-directory> with the location of the ServiceMeshControlPlane files on your local disk: USD oc create configmap --from-file=<profiles-directory> smcp-templates -n openshift-operators You can use the profiles parameter in the ServiceMeshControlPlane to specify one or more templates. apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: profiles: - default 1.12.2.2. Setting the correct network policy Service Mesh creates network policies in the Service Mesh control plane and member namespaces to allow traffic between them. Before you deploy, consider the following conditions to ensure the services in your service mesh that were previously exposed through an OpenShift Container Platform route. Traffic into the service mesh must always go through the ingress-gateway for Istio to work properly. Deploy services external to the service mesh in separate namespaces that are not in any service mesh. Non-mesh services that need to be deployed within a service mesh enlisted namespace should label their deployments maistra.io/expose-route: "true" , which ensures OpenShift Container Platform routes to these services still work. 1.13. Security If your service mesh application is constructed with a complex array of microservices, you can use Red Hat OpenShift Service Mesh to customize the security of the communication between those services. The infrastructure of OpenShift Container Platform along with the traffic management features of Service Mesh help you manage the complexity of your applications and secure microservices. Before you begin If you have a project, add your project to the ServiceMeshMemberRoll resource . If you don't have a project, install the Bookinfo sample application and add it to the ServiceMeshMemberRoll resource. The sample application helps illustrate security concepts. 1.13.1. About mutual Transport Layer Security (mTLS) Mutual Transport Layer Security (mTLS) is a protocol that enables two parties to authenticate each other. It is the default mode of authentication in some protocols (IKE, SSH) and optional in others (TLS). You can use mTLS without changes to the application or service code. The TLS is handled entirely by the service mesh infrastructure and between the two sidecar proxies. By default, mTLS in Red Hat OpenShift Service Mesh is enabled and set to permissive mode, where the sidecars in Service Mesh accept both plain-text traffic and connections that are encrypted using mTLS. If a service in your mesh is communicating with a service outside the mesh, strict mTLS could break communication between those services. Use permissive mode while you migrate your workloads to Service Mesh. Then, you can enable strict mTLS across your mesh, namespace, or application. Enabling mTLS across your mesh at the Service Mesh control plane level secures all the traffic in your service mesh without rewriting your applications and workloads. You can secure namespaces in your mesh at the data plane level in the ServiceMeshControlPlane resource. 
To customize traffic encryption connections, configure namespaces at the application level with PeerAuthentication and DestinationRule resources. 1.13.1.1. Enabling strict mTLS across the service mesh If your workloads do not communicate with outside services, you can quickly enable mTLS across your mesh without communication interruptions. You can enable it by setting spec.security.dataPlane.mtls to true in the ServiceMeshControlPlane resource. The Operator creates the required resources. apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: version: v2.3 security: dataPlane: mtls: true You can also enable mTLS by using the OpenShift Container Platform web console. Procedure Log in to the web console. Click the Project menu and select the project where you installed the Service Mesh control plane, for example istio-system . Click Operators Installed Operators . Click Service Mesh Control Plane under Provided APIs . Click the name of your ServiceMeshControlPlane resource, for example, basic . On the Details page, click the toggle in the Security section for Data Plane Security . 1.13.1.1.1. Configuring sidecars for incoming connections for specific services You can also configure mTLS for individual services by creating a policy. Procedure Create a YAML file using the following example. PeerAuthentication Policy example policy.yaml apiVersion: security.istio.io/v1beta1 kind: PeerAuthentication metadata: name: default namespace: <namespace> spec: mtls: mode: STRICT Replace <namespace> with the namespace where the service is located. Run the following command to create the resource in the namespace where the service is located. It must match the namespace field in the Policy resource you just created. USD oc create -n <namespace> -f <policy.yaml> Note If you are not using automatic mTLS and you are setting PeerAuthentication to STRICT, you must create a DestinationRule resource for your service. 1.13.1.1.2. Configuring sidecars for outgoing connections Create a destination rule to configure Service Mesh to use mTLS when sending requests to other services in the mesh. Procedure Create a YAML file using the following example. DestinationRule example destination-rule.yaml apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: default namespace: <namespace> spec: host: "*.<namespace>.svc.cluster.local" trafficPolicy: tls: mode: ISTIO_MUTUAL Replace <namespace> with the namespace where the service is located. Run the following command to create the resource in the namespace where the service is located. It must match the namespace field in the DestinationRule resource you just created. USD oc create -n <namespace> -f <destination-rule.yaml> 1.13.1.1.3. Setting the minimum and maximum protocol versions If your environment has specific requirements for encrypted traffic in your service mesh, you can control the cryptographic functions that are allowed by setting the spec.security.controlPlane.tls.minProtocolVersion or spec.security.controlPlane.tls.maxProtocolVersion in your ServiceMeshControlPlane resource. Those values, configured in your Service Mesh control plane resource, define the minimum and maximum TLS version used by mesh components when communicating securely over TLS. The default is TLS_AUTO and does not specify a version of TLS. Table 1.5. Valid values Value Description TLS_AUTO default TLSv1_0 TLS version 1.0 TLSv1_1 TLS version 1.1 TLSv1_2 TLS version 1.2 TLSv1_3 TLS version 1.3 Procedure Log in to the web console. 
Click the Project menu and select the project where you installed the Service Mesh control plane, for example istio-system . Click Operators Installed Operators . Click Service Mesh Control Plane under Provided APIs . Click the name of your ServiceMeshControlPlane resource, for example, basic . Click the YAML tab. Insert the following code snippet in the YAML editor. Replace the value of minProtocolVersion with the TLS version value. In this example, the minimum TLS version is set to TLSv1_2 . ServiceMeshControlPlane snippet kind: ServiceMeshControlPlane spec: security: controlPlane: tls: minProtocolVersion: TLSv1_2 Click Save . Click Refresh to verify that the changes updated correctly. 1.13.1.2. Validating encryption with Kiali The Kiali console offers several ways to validate whether or not your applications, services, and workloads have mTLS encryption enabled. Figure 1.5. Masthead icon mesh-wide mTLS enabled At the right side of the masthead, Kiali shows a lock icon when the mesh has strictly enabled mTLS for the whole service mesh. It means that all communications in the mesh use mTLS. Figure 1.6. Masthead icon mesh-wide mTLS partially enabled Kiali displays a hollow lock icon when either the mesh is configured in PERMISSIVE mode or there is an error in the mesh-wide mTLS configuration. Figure 1.7. Security badge The Graph page has the option to display a Security badge on the graph edges to indicate that mTLS is enabled. To enable security badges on the graph, from the Display menu, under Show Badges , select the Security checkbox. When an edge shows a lock icon, it means at least one request with mTLS enabled is present. If there are both mTLS and non-mTLS requests, the side panel shows the percentage of requests that use mTLS. The Applications Detail Overview page displays a Security icon on the graph edges where at least one request with mTLS enabled is present. The Workloads Detail Overview page displays a Security icon on the graph edges where at least one request with mTLS enabled is present. The Services Detail Overview page displays a Security icon on the graph edges where at least one request with mTLS enabled is present. Also note that Kiali displays a lock icon in the Network section next to ports that are configured for mTLS. 1.13.2. Configuring Role Based Access Control (RBAC) Role-based access control (RBAC) objects determine whether a user or service is allowed to perform a given action within a project. You can define mesh-, namespace-, and workload-wide access control for your workloads in the mesh. To configure RBAC, create an AuthorizationPolicy resource in the namespace for which you are configuring access. If you are configuring mesh-wide access, use the project where you installed the Service Mesh control plane, for example istio-system . For example, with RBAC, you can create policies that: Configure intra-project communication. Allow or deny full access to all workloads in the default namespace. Allow or deny ingress gateway access. Require a token for access. An authorization policy includes a selector, an action, and a list of rules: The selector field specifies the target of the policy. The action field specifies whether to allow or deny the request. The rules field specifies when to trigger the action. The from field specifies constraints on the request origin. The to field specifies constraints on request target and parameters. The when field specifies additional conditions that must be met to apply the rule.
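The following hedged sketch shows how the selector, action, and rule clauses fit together in a single policy; the namespace, labels, and issuer are illustrative placeholders, not values taken from this documentation.

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-frontend-get
  namespace: bookinfo              # example namespace
spec:
  selector:
    matchLabels:
      app: httpbin                 # target workload (example label)
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces: ["frontend"]   # constraint on the request origin
    to:
    - operation:
        methods: ["GET"]           # constraint on the request target
    when:
    - key: request.auth.claims[iss]           # additional condition
      values: ["https://issuer.example.com"]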
Procedure Create your AuthorizationPolicy resource. The following example shows a resource that updates the ingress-policy AuthorizationPolicy to deny an IP address from accessing the ingress gateway. apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: ingress-policy namespace: istio-system spec: selector: matchLabels: app: istio-ingressgateway action: DENY rules: - from: - source: ipBlocks: ["1.2.3.4"] Run the following command after you write your resource to create your resource in your namespace. The namespace must match your metadata.namespace field in your AuthorizationPolicy resource. USD oc create -n istio-system -f <filename> steps Consider the following examples for other common configurations. 1.13.2.1. Configure intra-project communication You can use AuthorizationPolicy to configure your Service Mesh control plane to allow or deny the traffic communicating with your mesh or services in your mesh. 1.13.2.1.1. Restrict access to services outside a namespace You can deny requests from any source that is not in the bookinfo namespace with the following AuthorizationPolicy resource example. apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin-deny namespace: bookinfo spec: selector: matchLabels: app: httpbin version: v1 action: DENY rules: - from: - source: notNamespaces: ["bookinfo"] 1.13.2.1.2. Creating allow-all and default deny-all authorization policies The following example shows an allow-all authorization policy that allows full access to all workloads in the bookinfo namespace. apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: allow-all namespace: bookinfo spec: action: ALLOW rules: - {} The following example shows a policy that denies any access to all workloads in the bookinfo namespace. apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: deny-all namespace: bookinfo spec: {} 1.13.2.2. Allow or deny access to the ingress gateway You can set an authorization policy to add allow or deny lists based on IP addresses. apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: ingress-policy namespace: istio-system spec: selector: matchLabels: app: istio-ingressgateway action: ALLOW rules: - from: - source: ipBlocks: ["1.2.3.4", "5.6.7.0/24"] 1.13.2.3. Restrict access with JSON Web Token You can restrict what can access your mesh with a JSON Web Token (JWT). After authentication, a user or service can access routes, services that are associated with that token. Create a RequestAuthentication resource, which defines the authentication methods that are supported by a workload. The following example accepts a JWT issued by http://localhost:8080/auth/realms/master . apiVersion: "security.istio.io/v1beta1" kind: "RequestAuthentication" metadata: name: "jwt-example" namespace: bookinfo spec: selector: matchLabels: app: httpbin jwtRules: - issuer: "http://localhost:8080/auth/realms/master" jwksUri: "http://keycloak.default.svc:8080/auth/realms/master/protocol/openid-connect/certs" Then, create an AuthorizationPolicy resource in the same namespace to work with RequestAuthentication resource you created. The following example requires a JWT to be present in the Authorization header when sending a request to httpbin workloads. 
apiVersion: "security.istio.io/v1beta1" kind: "AuthorizationPolicy" metadata: name: "frontend-ingress" namespace: bookinfo spec: selector: matchLabels: app: httpbin action: DENY rules: - from: - source: notRequestPrincipals: ["*"] 1.13.3. Configuring cipher suites and ECDH curves Cipher suites and Elliptic-curve Diffie-Hellman (ECDH curves) can help you secure your service mesh. You can define a comma separated list of cipher suites using spec.security.controlplane.tls.cipherSuites and ECDH curves using spec.security.controlplane.tls.ecdhCurves in your ServiceMeshControlPlane resource. If either of these attributes are empty, then the default values are used. The cipherSuites setting is effective if your service mesh uses TLS 1.2 or earlier. It has no effect when negotiating with TLS 1.3. Set your cipher suites in the comma separated list in order of priority. For example, ecdhCurves: CurveP256, CurveP384 sets CurveP256 as a higher priority than CurveP384 . Note You must include either TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 or TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 when you configure the cipher suite. HTTP/2 support requires at least one of these cipher suites. The supported cipher suites are: TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA TLS_RSA_WITH_AES_128_GCM_SHA256 TLS_RSA_WITH_AES_256_GCM_SHA384 TLS_RSA_WITH_AES_128_CBC_SHA256 TLS_RSA_WITH_AES_128_CBC_SHA TLS_RSA_WITH_AES_256_CBC_SHA TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA TLS_RSA_WITH_3DES_EDE_CBC_SHA The supported ECDH Curves are: CurveP256 CurveP384 CurveP521 X25519 1.13.4. Adding an external certificate authority key and certificate By default, Red Hat OpenShift Service Mesh generates a self-signed root certificate and key and uses them to sign the workload certificates. You can also use the user-defined certificate and key to sign workload certificates with user-defined root certificate. This task demonstrates an example to plug certificates and key into Service Mesh. Prerequisites Install Red Hat OpenShift Service Mesh with mutual TLS enabled to configure certificates. This example uses the certificates from the Maistra repository . For production, use your own certificates from your certificate authority. Deploy the Bookinfo sample application to verify the results with these instructions. OpenSSL is required to verify certificates. 1.13.4.1. Adding an existing certificate and key To use an existing signing (CA) certificate and key, you must create a chain of trust file that includes the CA certificate, key, and root certificate. You must use the following exact file names for each of the corresponding certificates. The CA certificate is named ca-cert.pem , the key is ca-key.pem , and the root certificate, which signs ca-cert.pem , is named root-cert.pem . If your workload uses intermediate certificates, you must specify them in a cert-chain.pem file. Save the example certificates from the Maistra repository locally and replace <path> with the path to your certificates. Create a secret named cacert that includes the input files ca-cert.pem , ca-key.pem , root-cert.pem and cert-chain.pem . 
USD oc create secret generic cacerts -n istio-system --from-file=<path>/ca-cert.pem \ --from-file=<path>/ca-key.pem --from-file=<path>/root-cert.pem \ --from-file=<path>/cert-chain.pem In the ServiceMeshControlPlane resource, set spec.security.dataPlane.mtls to true and configure the certificateAuthority field as shown in the following example. The default rootCADir is /etc/cacerts . You do not need to set the privateKey if the key and certs are mounted in the default location. Service Mesh reads the certificates and key from the secret-mount files. apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: security: dataPlane: mtls: true certificateAuthority: type: Istiod istiod: type: PrivateKey privateKey: rootCADir: /etc/cacerts After creating, changing, or deleting the cacerts secret, the Service Mesh control plane istiod and gateway pods must be restarted so that the changes take effect. Use the following command to restart the pods: USD oc -n istio-system delete pods -l 'app in (istiod,istio-ingressgateway, istio-egressgateway)' The Operator will automatically recreate the pods after they have been deleted. Restart the bookinfo application pods so that the sidecar proxies pick up the secret changes. Use the following command to restart the pods: USD oc -n bookinfo delete pods --all You should see output similar to the following: pod "details-v1-6cd699df8c-j54nh" deleted pod "productpage-v1-5ddcb4b84f-mtmf2" deleted pod "ratings-v1-bdbcc68bc-kmng4" deleted pod "reviews-v1-754ddd7b6f-lqhsv" deleted pod "reviews-v2-675679877f-q67r2" deleted pod "reviews-v3-79d7549c7-c2gjs" deleted Verify that the pods were created and are ready with the following command: USD oc get pods -n bookinfo 1.13.4.2. Verifying your certificates Use the Bookinfo sample application to verify that the workload certificates are signed by the certificates that were plugged into the CA. This requires that you have openssl installed on your machine. To extract certificates from bookinfo workloads, use the following command: USD sleep 60 USD oc -n bookinfo exec "USD(oc -n bookinfo get pod -l app=productpage -o jsonpath={.items..metadata.name})" -c istio-proxy -- openssl s_client -showcerts -connect details:9080 > bookinfo-proxy-cert.txt USD sed -n '/-----BEGIN CERTIFICATE-----/{:start /-----END CERTIFICATE-----/!{N;b start};/.*/p}' bookinfo-proxy-cert.txt > certs.pem USD awk 'BEGIN {counter=0;} /BEGIN CERT/{counter++} { print > "proxy-cert-" counter ".pem"}' < certs.pem After running the command, you should have three files in your working directory: proxy-cert-1.pem , proxy-cert-2.pem and proxy-cert-3.pem . Verify that the root certificate is the same as the one specified by the administrator. Replace <path> with the path to your certificates. USD openssl x509 -in <path>/root-cert.pem -text -noout > /tmp/root-cert.crt.txt Run the following syntax at the terminal window. USD openssl x509 -in ./proxy-cert-3.pem -text -noout > /tmp/pod-root-cert.crt.txt Compare the certificates by running the following syntax at the terminal window. USD diff -s /tmp/root-cert.crt.txt /tmp/pod-root-cert.crt.txt You should see the following result: Files /tmp/root-cert.crt.txt and /tmp/pod-root-cert.crt.txt are identical Verify that the CA certificate is the same as the one specified by the administrator. Replace <path> with the path to your certificates. USD openssl x509 -in <path>/ca-cert.pem -text -noout > /tmp/ca-cert.crt.txt Run the following syntax at the terminal window.
USD openssl x509 -in ./proxy-cert-2.pem -text -noout > /tmp/pod-cert-chain-ca.crt.txt Compare the certificates by running the following syntax at the terminal window. USD diff -s /tmp/ca-cert.crt.txt /tmp/pod-cert-chain-ca.crt.txt You should see the following result: Files /tmp/ca-cert.crt.txt and /tmp/pod-cert-chain-ca.crt.txt are identical. Verify the certificate chain from the root certificate to the workload certificate. Replace <path> with the path to your certificates. USD openssl verify -CAfile <(cat <path>/ca-cert.pem <path>/root-cert.pem) ./proxy-cert-1.pem You should see the following result: ./proxy-cert-1.pem: OK 1.13.4.3. Removing the certificates To remove the certificates you added, follow these steps. Remove the secret cacerts . In this example, istio-system is the name of the Service Mesh control plane project. USD oc delete secret cacerts -n istio-system Redeploy Service Mesh with a self-signed root certificate in the ServiceMeshControlPlane resource. apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: security: dataPlane: mtls: true 1.14. Managing traffic in your service mesh Using Red Hat OpenShift Service Mesh, you can control the flow of traffic and API calls between services. Some services in your service mesh might need to communicate within the mesh and others might need to be hidden. You can manage the traffic to hide specific backend services, expose services, create testing or versioning deployments, or add a security layer on a set of services. 1.14.1. Using gateways You can use a gateway to manage inbound and outbound traffic for your mesh to specify which traffic you want to enter or leave the mesh. Gateway configurations are applied to standalone Envoy proxies that are running at the edge of the mesh, rather than sidecar Envoy proxies running alongside your service workloads. Unlike other mechanisms for controlling traffic entering your systems, such as the Kubernetes Ingress APIs, Red Hat OpenShift Service Mesh gateways use the full power and flexibility of traffic routing. The Red Hat OpenShift Service Mesh gateway resource can use layer 4-6 load balancing properties, such as ports, to expose and configure Red Hat OpenShift Service Mesh TLS settings. Instead of adding application-layer traffic routing (L7) to the same API resource, you can bind a regular Red Hat OpenShift Service Mesh virtual service to the gateway and manage gateway traffic like any other data plane traffic in a service mesh. Gateways are primarily used to manage ingress traffic, but you can also configure egress gateways. An egress gateway lets you configure a dedicated exit node for the traffic leaving the mesh. This enables you to limit which services have access to external networks, which adds security control to your service mesh. You can also use a gateway to configure a purely internal proxy. Gateway example A gateway resource describes a load balancer operating at the edge of the mesh receiving incoming or outgoing HTTP/TCP connections. The specification describes a set of ports that should be exposed, the type of protocol to use, SNI configuration for the load balancer, and so on. 
The following example shows a sample gateway configuration for external HTTPS ingress traffic: apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: ext-host-gwy spec: selector: istio: ingressgateway # use istio default controller servers: - port: number: 443 name: https protocol: HTTPS hosts: - ext-host.example.com tls: mode: SIMPLE serverCertificate: /tmp/tls.crt privateKey: /tmp/tls.key This gateway configuration lets HTTPS traffic from ext-host.example.com into the mesh on port 443, but doesn't specify any routing for the traffic. To specify routing and for the gateway to work as intended, you must also bind the gateway to a virtual service. You do this using the virtual service's gateways field, as shown in the following example: apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: virtual-svc spec: hosts: - ext-host.example.com gateways: - ext-host-gwy You can then configure the virtual service with routing rules for the external traffic. 1.14.1.1. Enabling gateway injection Gateway configurations apply to standalone Envoy proxies running at the edge of the mesh, rather than sidecar Envoy proxies running alongside your service workloads. Because gateways are Envoy proxies, you can configure Service Mesh to inject gateways automatically, similar to how you can inject sidecars. Using automatic injection for gateways, you can deploy and manage gateways independent from the ServiceMeshControlPlane resource and manage the gateways with your user applications. Using auto-injection for gateway deployments gives developers full control over the gateway deployment while simplifying operations. When a new upgrade is available, or a configuration has changed, you restart the gateway pods to update them. Doing so makes the experience of operating a gateway deployment the same as operating sidecars. Note Injection is disabled by default for the ServiceMeshControlPlane namespace, for example the istio-system namespace. As a security best practice, deploy gateways in a different namespace from the control plane. 1.14.1.2. Deploying automatic gateway injection When deploying a gateway, you must opt-in to injection by adding an injection label or annotation to the gateway deployment object. The following example deploys a gateway. Prerequisites The namespace must be a member of the mesh by defining it in the ServiceMeshMemberRoll or by creating a ServiceMeshMember resource. Procedure Set a unique label for the Istio ingress gateway. This setting is required to ensure that the gateway can select the workload. This example uses ingressgateway as the name of the gateway. apiVersion: v1 kind: Service metadata: name: istio-ingressgateway namespace: istio-ingress spec: type: ClusterIP selector: istio: ingressgateway ports: - name: http port: 80 targetPort: 8080 - name: https port: 443 targetPort: 8443 --- apiVersion: apps/v1 kind: Deployment metadata: name: istio-ingressgateway namespace: istio-ingress spec: selector: matchLabels: istio: ingressgateway template: metadata: annotations: inject.istio.io/templates: gateway labels: istio: ingressgateway sidecar.istio.io/inject: "true" 1 spec: containers: - name: istio-proxy image: auto 2 1 Enable gateway injection by setting the sidecar.istio.io/inject field to "true" . 2 Set the image field to auto so that the image automatically updates each time the pod starts. Set up roles to allow reading credentials for TLS. 
apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: istio-ingressgateway-sds namespace: istio-ingress rules: - apiGroups: [""] resources: ["secrets"] verbs: ["get", "watch", "list"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: istio-ingressgateway-sds namespace: istio-ingress roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: istio-ingressgateway-sds subjects: - kind: ServiceAccount name: default Grant access to the new gateway from outside the cluster, which is required whenever spec.security.manageNetworkPolicy is set to true . apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: gatewayingress namespace: istio-ingress spec: podSelector: matchLabels: istio: ingressgateway ingress: - {} policyTypes: - Ingress Automatically scale the pod when ingress traffic increases. This example sets the minimum replicas to 2 and the maximum replicas to 5 . It also creates another replica when utilization reaches 80%. apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: labels: istio: ingressgateway release: istio name: ingressgatewayhpa namespace: istio-ingress spec: maxReplicas: 5 metrics: - resource: name: cpu target: averageUtilization: 80 type: Utilization type: Resource minReplicas: 2 scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: istio-ingressgateway Specify the minimum number of pods that must be running on the node. This example ensures one replica is running if a pod gets restarted on a new node. apiVersion: policy/v1 kind: PodDisruptionBudget metadata: labels: istio: ingressgateway release: istio name: ingressgatewaypdb namespace: istio-ingress spec: minAvailable: 1 selector: matchLabels: istio: ingressgateway 1.14.1.3. Managing ingress traffic In Red Hat OpenShift Service Mesh, the Ingress Gateway enables features such as monitoring, security, and route rules to apply to traffic that enters the cluster. Use a Service Mesh gateway to expose a service outside of the service mesh. 1.14.1.3.1. Determining the ingress IP and ports Ingress configuration differs depending on if your environment supports an external load balancer. An external load balancer is set in the ingress IP and ports for the cluster. To determine if your cluster's IP and ports are configured for external load balancers, run the following command. In this example, istio-system is the name of the Service Mesh control plane project. USD oc get svc istio-ingressgateway -n istio-system That command returns the NAME , TYPE , CLUSTER-IP , EXTERNAL-IP , PORT(S) , and AGE of each item in your namespace. If the EXTERNAL-IP value is set, your environment has an external load balancer that you can use for the ingress gateway. If the EXTERNAL-IP value is <none> , or perpetually <pending> , your environment does not provide an external load balancer for the ingress gateway. You can access the gateway using the service's node port . 1.14.1.3.1.1. Determining ingress ports with a load balancer Follow these instructions if your environment has an external load balancer. Procedure Run the following command to set the ingress IP and ports. This command sets a variable in your terminal. USD export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}') Run the following command to set the ingress port. 
USD export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}') Run the following command to set the secure ingress port. USD export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].port}') Run the following command to set the TCP ingress port. USD export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="tcp")].port}') Note In some environments, the load balancer may be exposed using a hostname instead of an IP address. For that case, the ingress gateway's EXTERNAL-IP value is not an IP address. Instead, it's a hostname, and the command fails to set the INGRESS_HOST environment variable. In that case, use the following command to correct the INGRESS_HOST value: USD export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}') 1.14.1.3.1.2. Determining ingress ports without a load balancer If your environment does not have an external load balancer, determine the ingress ports and use a node port instead. Procedure Set the ingress ports. USD export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}') Run the following command to set the secure ingress port. USD export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}') Run the following command to set the TCP ingress port. USD export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="tcp")].nodePort}') 1.14.1.4. Configuring an ingress gateway An ingress gateway is a load balancer operating at the edge of the mesh that receives incoming HTTP/TCP connections. It configures exposed ports and protocols but does not include any traffic routing configuration. Traffic routing for ingress traffic is instead configured with routing rules, the same way as for internal service requests. The following steps show how to create a gateway and configure a VirtualService to expose a service in the Bookinfo sample application to outside traffic for paths /productpage and /login . Procedure Create a gateway to accept traffic. Create a YAML file, and copy the following YAML into it. Gateway example gateway.yaml apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: bookinfo-gateway spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - "*" Apply the YAML file. USD oc apply -f gateway.yaml Create a VirtualService object to rewrite the host header. Create a YAML file, and copy the following YAML into it. Virtual service example apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo spec: hosts: - "*" gateways: - bookinfo-gateway http: - match: - uri: exact: /productpage - uri: prefix: /static - uri: exact: /login - uri: exact: /logout - uri: prefix: /api/v1/products route: - destination: host: productpage port: number: 9080 Apply the YAML file. USD oc apply -f vs.yaml Test that the gateway and VirtualService have been set correctly. Set the Gateway URL. export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}') Set the port number. In this example, istio-system is the name of the Service Mesh control plane project. 
export TARGET_PORT=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.port.targetPort}') Test a page that has been explicitly exposed. curl -s -I "USDGATEWAY_URL/productpage" The expected result is 200 . 1.14.2. Understanding automatic routes OpenShift routes for gateways are automatically managed in Service Mesh. Every time an Istio Gateway is created, updated or deleted inside the service mesh, an OpenShift route is created, updated or deleted. 1.14.2.1. Routes with subdomains Red Hat OpenShift Service Mesh creates the route with the subdomain, but OpenShift Container Platform must be configured to enable it. Subdomains, for example *.domain.com , are supported, but not by default. Configure an OpenShift Container Platform wildcard policy before configuring a wildcard host gateway. For more information, see Using wildcard routes . 1.14.2.2. Creating subdomain routes The following example creates a gateway in the Bookinfo sample application, which creates subdomain routes. apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: gateway1 spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - www.bookinfo.com - bookinfo.example.com The Gateway resource creates the following OpenShift routes. You can check that the routes are created by using the following command. In this example, istio-system is the name of the Service Mesh control plane project. USD oc -n istio-system get routes Expected output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD gateway1-lvlfn bookinfo.example.com istio-ingressgateway <all> None gateway1-scqhv www.bookinfo.com istio-ingressgateway <all> None If you delete the gateway, Red Hat OpenShift Service Mesh deletes the routes. However, routes you have manually created are never modified by Red Hat OpenShift Service Mesh. 1.14.2.3. Route labels and annotations Sometimes specific labels or annotations are needed in an OpenShift route. For example, some advanced features in OpenShift routes are managed using special annotations. See "Route-specific annotations" in the following "Additional resources" section. For this and other use cases, Red Hat OpenShift Service Mesh will copy all labels and annotations present in the Istio gateway resource (with the exception of annotations starting with kubectl.kubernetes.io ) into the managed OpenShift route resource. If you need specific labels or annotations in the OpenShift routes created by Service Mesh, create them in the Istio gateway resource and they will be copied into the OpenShift route resources managed by the Service Mesh. Additional resources Route-specific annotations . 1.14.2.4. Disabling automatic route creation By default, the ServiceMeshControlPlane resource automatically synchronizes the Istio gateway resources with OpenShift routes. Disabling the automatic route creation allows you more flexibility to control routes if you have a special case or prefer to control routes manually. 1.14.2.4.1. Disabling automatic route creation for specific cases If you want to disable the automatic management of OpenShift routes for a specific Istio gateway, you must add the annotation maistra.io/manageRoute: false to the gateway metadata definition. Red Hat OpenShift Service Mesh will ignore Istio gateways with this annotation, while keeping the automatic management of the other Istio gateways. 1.14.2.4.2. 
Disabling automatic route creation for all cases You can disable the automatic management of OpenShift routes for all gateways in your mesh. Disable integration between Istio gateways and OpenShift routes by setting the ServiceMeshControlPlane field gateways.openshiftRoute.enabled to false . For example, see the following resource snippet. apiVersion: maistra.io/v1alpha1 kind: ServiceMeshControlPlane metadata: namespace: istio-system spec: gateways: openshiftRoute: enabled: false 1.14.3. Understanding service entries A service entry adds an entry to the service registry that Red Hat OpenShift Service Mesh maintains internally. After you add the service entry, the Envoy proxies send traffic to the service as if it is a service in your mesh. Service entries allow you to do the following: Manage traffic for services that run outside of the service mesh. Redirect and forward traffic for external destinations (such as, APIs consumed from the web) or traffic to services in legacy infrastructure. Define retry, timeout, and fault injection policies for external destinations. Run a mesh service in a Virtual Machine (VM) by adding VMs to your mesh. Note Add services from a different cluster to the mesh to configure a multicluster Red Hat OpenShift Service Mesh mesh on Kubernetes. Service entry examples The following example is a mesh-external service entry that adds the ext-resource external dependency to the Red Hat OpenShift Service Mesh service registry: apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: svc-entry spec: hosts: - ext-svc.example.com ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS Specify the external resource using the hosts field. You can qualify it fully or use a wildcard prefixed domain name. You can configure virtual services and destination rules to control traffic to a service entry in the same way you configure traffic for any other service in the mesh. For example, the following destination rule configures the traffic route to use mutual TLS to secure the connection to the ext-svc.example.com external service that is configured using the service entry: apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: ext-res-dr spec: host: ext-svc.example.com trafficPolicy: tls: mode: MUTUAL clientCertificate: /etc/certs/myclientcert.pem privateKey: /etc/certs/client_private_key.pem caCertificates: /etc/certs/rootcacerts.pem 1.14.4. Using VirtualServices You can route requests dynamically to multiple versions of a microservice through Red Hat OpenShift Service Mesh with a virtual service. With virtual services, you can: Address multiple application services through a single virtual service. If your mesh uses Kubernetes, for example, you can configure a virtual service to handle all services in a specific namespace. A virtual service enables you to turn a monolithic application into a service consisting of distinct microservices with a seamless consumer experience. Configure traffic rules in combination with gateways to control ingress and egress traffic. 1.14.4.1. Configuring VirtualServices Requests are routed to services within a service mesh with virtual services. Each virtual service consists of a set of routing rules that are evaluated in order. Red Hat OpenShift Service Mesh matches each given request to the virtual service to a specific real destination within the mesh. 
Without virtual services, Red Hat OpenShift Service Mesh distributes traffic using least requests load balancing between all service instances. With a virtual service, you can specify traffic behavior for one or more hostnames. Routing rules in the virtual service tell Red Hat OpenShift Service Mesh how to send the traffic for the virtual service to appropriate destinations. Route destinations can be versions of the same service or entirely different services. Procedure Create a YAML file using the following example to route requests to different versions of the Bookinfo sample application service depending on which user connects to the application. Example VirtualService.yaml apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: reviews spec: hosts: - reviews http: - match: - headers: end-user: exact: jason route: - destination: host: reviews subset: v2 - route: - destination: host: reviews subset: v3 Run the following command to apply VirtualService.yaml , where VirtualService.yaml is the path to the file. USD oc apply -f <VirtualService.yaml> 1.14.4.2. VirtualService configuration reference Parameter Description The hosts field lists the virtual service's destination address to which the routing rules apply. This is the address(es) that are used to send requests to the service. The virtual service hostname can be an IP address, a DNS name, or a short name that resolves to a fully qualified domain name. The http section contains the virtual service's routing rules which describe match conditions and actions for routing HTTP/1.1, HTTP2, and gRPC traffic sent to the destination as specified in the hosts field. A routing rule consists of the destination where you want the traffic to go and any specified match conditions. The first routing rule in the example has a condition that begins with the match field. In this example, this routing applies to all requests from the user jason . Add the headers , end-user , and exact fields to select the appropriate requests. The destination field in the route section specifies the actual destination for traffic that matches this condition. Unlike the virtual service's host, the destination's host must be a real destination that exists in the Red Hat OpenShift Service Mesh service registry. This can be a mesh service with proxies or a non-mesh service added using a service entry. In this example, the hostname is a Kubernetes service name: 1.14.5. Understanding destination rules Destination rules are applied after virtual service routing rules are evaluated, so they apply to the traffic's real destination. Virtual services route traffic to a destination. Destination rules configure what happens to traffic at that destination. By default, Red Hat OpenShift Service Mesh uses a least requests load balancing policy, where the service instance in the pool with the least number of active connections receives the request. Red Hat OpenShift Service Mesh also supports the following models, which you can specify in destination rules for requests to a particular service or service subset. Random: Requests are forwarded at random to instances in the pool. Weighted: Requests are forwarded to instances in the pool according to a specific percentage. Least requests: Requests are forwarded to instances with the least number of requests. 
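Of these models, weighted distribution is usually expressed as route weights in a virtual service that targets the subsets defined by a destination rule, rather than in the destination rule itself. The following sketch is illustrative only: the my-svc-weighted name is hypothetical, and it assumes v1 and v2 subsets for the my-svc host such as those declared in the destination rule example that follows.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-svc-weighted   # hypothetical name, for illustration only
spec:
  hosts:
  - my-svc
  http:
  - route:
    - destination:
        host: my-svc
        subset: v1
      weight: 80   # about 80% of requests go to the v1 subset
    - destination:
        host: my-svc
        subset: v2
      weight: 20   # about 20% of requests go to the v2 subset

The subsets referenced by a weighted route must be declared in a destination rule for the same host.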
Destination rule example The following example destination rule configures three different subsets for the my-svc destination service, with different load balancing policies: apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: my-destination-rule spec: host: my-svc trafficPolicy: loadBalancer: simple: RANDOM subsets: - name: v1 labels: version: v1 - name: v2 labels: version: v2 trafficPolicy: loadBalancer: simple: ROUND_ROBIN - name: v3 labels: version: v3 1.14.6. Understanding network policies Red Hat OpenShift Service Mesh automatically creates and manages a number of NetworkPolicies resources in the Service Mesh control plane and application namespaces. This is to ensure that applications and the control plane can communicate with each other. For example, if you have configured your OpenShift Container Platform cluster to use the SDN plugin, Red Hat OpenShift Service Mesh creates a NetworkPolicy resource in each member project. This enables ingress to all pods in the mesh from the other mesh members and the control plane. This also restricts ingress to only member projects. If you require ingress from non-member projects, you need to create a NetworkPolicy to allow that traffic through. If you remove a namespace from Service Mesh, this NetworkPolicy resource is deleted from the project. 1.14.6.1. Disabling automatic NetworkPolicy creation If you want to disable the automatic creation and management of NetworkPolicy resources, for example to enforce company security policies, or to allow direct access to pods in the mesh, you can do so. You can edit the ServiceMeshControlPlane and set spec.security.manageNetworkPolicy to false . Note When you disable spec.security.manageNetworkPolicy Red Hat OpenShift Service Mesh will not create any NetworkPolicy objects. The system administrator is responsible for managing the network and fixing any issues this might cause. Prerequisites Red Hat OpenShift Service Mesh Operator version 2.1.1 or higher installed. ServiceMeshControlPlane resource updated to version 2.1 or higher. Procedure In the OpenShift Container Platform web console, click Operators Installed Operators . Select the project where you installed the Service Mesh control plane, for example istio-system , from the Project menu. Click the Red Hat OpenShift Service Mesh Operator. In the Istio Service Mesh Control Plane column, click the name of your ServiceMeshControlPlane , for example basic-install . On the Create ServiceMeshControlPlane Details page, click YAML to modify your configuration. Set the ServiceMeshControlPlane field spec.security.manageNetworkPolicy to false , as shown in this example. apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: security: manageNetworkPolicy: false Click Save . 1.14.7. Configuring sidecars for traffic management By default, Red Hat OpenShift Service Mesh configures every Envoy proxy to accept traffic on all the ports of its associated workload, and to reach every workload in the mesh when forwarding traffic. You can use a sidecar configuration to do the following: Fine-tune the set of ports and protocols that an Envoy proxy accepts. Limit the set of services that the Envoy proxy can reach. Note To optimize performance of your service mesh, consider limiting Envoy proxy configurations. In the Bookinfo sample application, configure a Sidecar so all services can reach other services running in the same namespace and control plane. 
This Sidecar configuration is required for using Red Hat OpenShift Service Mesh policy and telemetry features. Procedure Create a YAML file using the following example to specify that you want a sidecar configuration to apply to all workloads in a particular namespace. Otherwise, choose specific workloads using a workloadSelector . Example sidecar.yaml apiVersion: networking.istio.io/v1alpha3 kind: Sidecar metadata: name: default namespace: bookinfo spec: egress: - hosts: - "./*" - "istio-system/*" Run the following command to apply sidecar.yaml , where sidecar.yaml is the path to the file. USD oc apply -f sidecar.yaml Run the following command to verify that the sidecar was created successfully. USD oc get sidecar 1.14.8. Routing Tutorial This guide references the Bookinfo sample application to provide examples of routing in an example application. Install the Bookinfo application to learn how these routing examples work. 1.14.8.1. Bookinfo routing tutorial The Service Mesh Bookinfo sample application consists of four separate microservices, each with multiple versions. After installing the Bookinfo sample application, three different versions of the reviews microservice run concurrently. When you access the Bookinfo app /product page in a browser and refresh several times, sometimes the book review output contains star ratings and other times it does not. Without an explicit default service version to route to, Service Mesh routes requests to all available versions one after the other. This tutorial helps you apply rules that route all traffic to v1 (version 1) of the microservices. Later, you can apply a rule to route traffic based on the value of an HTTP request header. Prerequisites: Deploy the Bookinfo sample application to work with the following examples. 1.14.8.2. Applying a virtual service In the following procedure, the virtual service routes all traffic to v1 of each micro-service by applying virtual services that set the default version for the micro-services. Procedure Apply the virtual services. USD oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.3/samples/bookinfo/networking/virtual-service-all-v1.yaml To verify that you applied the virtual services, display the defined routes with the following command: USD oc get virtualservices -o yaml That command returns a resource of kind: VirtualService in YAML format. You have configured Service Mesh to route to the v1 version of the Bookinfo microservices including the reviews service version 1. 1.14.8.3. Testing the new route configuration Test the new configuration by refreshing the /productpage of the Bookinfo application. Procedure Set the value for the GATEWAY_URL parameter. You can use this variable to find the URL for your Bookinfo product page later. In this example, istio-system is the name of the control plane project. export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}') Run the following command to retrieve the URL for the product page. echo "http://USDGATEWAY_URL/productpage" Open the Bookinfo site in your browser. The reviews part of the page displays with no rating stars, no matter how many times you refresh. This is because you configured Service Mesh to route all traffic for the reviews service to the version reviews:v1 and this version of the service does not access the star ratings service. Your service mesh now routes traffic to one version of a service. 1.14.8.4. 
Route based on user identity Change the route configuration so that all traffic from a specific user is routed to a specific service version. In this case, all traffic from a user named jason will be routed to the service reviews:v2 . Service Mesh does not have any special, built-in understanding of user identity. This example is enabled by the fact that the productpage service adds a custom end-user header to all outbound HTTP requests to the reviews service. Procedure Run the following command to enable user-based routing in the Bookinfo sample application. USD oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.3/samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml Run the following command to confirm the rule is created. This command returns all resources of kind: VirtualService in YAML format. USD oc get virtualservice reviews -o yaml On the /productpage of the Bookinfo app, log in as user jason with no password. Refresh the browser. The star ratings appear to each review. Log in as another user (pick any name you want). Refresh the browser. Now the stars are gone. Traffic is now routed to reviews:v1 for all users except Jason. You have successfully configured the Bookinfo sample application to route traffic based on user identity. 1.15. Metrics, logs, and traces Once you have added your application to the mesh, you can observe the data flow through your application. If you do not have your own application installed, you can see how observability works in Red Hat OpenShift Service Mesh by installing the Bookinfo sample application . 1.15.1. Discovering console addresses Red Hat OpenShift Service Mesh provides the following consoles to view your service mesh data: Kiali console - Kiali is the management console for Red Hat OpenShift Service Mesh. Jaeger console - Jaeger is the management console for Red Hat OpenShift distributed tracing. Grafana console - Grafana provides mesh administrators with advanced query and metrics analysis and dashboards for Istio data. Optionally, Grafana can be used to analyze service mesh metrics. Prometheus console - Red Hat OpenShift Service Mesh uses Prometheus to store telemetry information from services. When you install the Service Mesh control plane, it automatically generates routes for each of the installed components. Once you have the route address, you can access the Kiali, Jaeger, Prometheus, or Grafana console to view and manage your service mesh data. Prerequisite The component must be enabled and installed. For example, if you did not install distributed tracing, you will not be able to access the Jaeger console. Procedure from OpenShift console Log in to the OpenShift Container Platform web console as a user with cluster-admin rights. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Navigate to Networking Routes . On the Routes page, select the Service Mesh control plane project, for example istio-system , from the Namespace menu. The Location column displays the linked address for each route. If necessary, use the filter to find the component console whose route you want to access. Click the route Location to launch the console. Click Log In With OpenShift . Procedure from the CLI Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. 
USD oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443 Switch to the Service Mesh control plane project. In this example, istio-system is the Service Mesh control plane project. Run the following command: USD oc project istio-system To get the routes for the various Red Hat OpenShift Service Mesh consoles, run the folowing command: USD oc get routes This command returns the URLs for the Kiali, Jaeger, Prometheus, and Grafana web consoles, and any other routes in your service mesh. You should see output similar to the following: NAME HOST/PORT SERVICES PORT TERMINATION bookinfo-gateway bookinfo-gateway-yourcompany.com istio-ingressgateway http2 grafana grafana-yourcompany.com grafana <all> reencrypt/Redirect istio-ingressgateway istio-ingress-yourcompany.com istio-ingressgateway 8080 jaeger jaeger-yourcompany.com jaeger-query <all> reencrypt kiali kiali-yourcompany.com kiali 20001 reencrypt/Redirect prometheus prometheus-yourcompany.com prometheus <all> reencrypt/Redirect Copy the URL for the console you want to access from the HOST/PORT column into a browser to open the console. Click Log In With OpenShift . 1.15.2. Accessing the Kiali console You can view your application's topology, health, and metrics in the Kiali console. If your service is experiencing problems, the Kiali console lets you view the data flow through your service. You can view insights about the mesh components at different levels, including abstract applications, services, and workloads. Kiali also provides an interactive graph view of your namespace in real time. To access the Kiali console you must have Red Hat OpenShift Service Mesh installed, Kiali installed and configured. The installation process creates a route to access the Kiali console. If you know the URL for the Kiali console, you can access it directly. If you do not know the URL, use the following directions. Procedure for administrators Log in to the OpenShift Container Platform web console with an administrator role. Click Home Projects . On the Projects page, if necessary, use the filter to find the name of your project. Click the name of your project, for example, bookinfo . On the Project details page, in the Launcher section, click the Kiali link. Log in to the Kiali console with the same user name and password that you use to access the OpenShift Container Platform console. When you first log in to the Kiali Console, you see the Overview page which displays all the namespaces in your service mesh that you have permission to view. If you are validating the console installation and namespaces have not yet been added to the mesh, there might not be any data to display other than istio-system . Procedure for developers Log in to the OpenShift Container Platform web console with a developer role. Click Project . On the Project Details page, if necessary, use the filter to find the name of your project. Click the name of your project, for example, bookinfo . On the Project page, in the Launcher section, click the Kiali link. Click Log In With OpenShift . 1.15.3. Viewing service mesh data in the Kiali console The Kiali Graph offers a powerful visualization of your mesh traffic. The topology combines real-time request traffic with your Istio configuration information to present immediate insight into the behavior of your service mesh, letting you quickly pinpoint issues. Multiple Graph Types let you visualize traffic as a high-level service topology, a low-level workload topology, or as an application-level topology. 
There are several graphs to choose from: The App graph shows an aggregate workload for all applications that are labeled the same. The Service graph shows a node for each service in your mesh but excludes all applications and workloads from the graph. It provides a high-level view and aggregates all traffic for defined services. The Versioned App graph shows a node for each version of an application. All versions of an application are grouped together. The Workload graph shows a node for each workload in your service mesh. This graph does not require you to use the application and version labels. If your application does not use version labels, use this graph. Graph nodes are decorated with a variety of information, pointing out various routing options like virtual services and service entries, as well as special configuration like fault-injection and circuit breakers. It can identify mTLS issues, latency issues, error traffic, and more. The Graph is highly configurable, can show traffic animation, and has powerful Find and Hide abilities. Click the Legend button to view information about the shapes, colors, arrows, and badges displayed in the graph. To view a summary of metrics, select any node or edge in the graph to display its metric details in the summary details panel. 1.15.3.1. Changing graph layouts in Kiali The layout for the Kiali graph can render differently depending on your application architecture and the data to display. For example, the number of graph nodes and their interactions can determine how the Kiali graph is rendered. Because it is not possible to create a single layout that renders nicely for every situation, Kiali offers a choice of several different layouts. Prerequisites If you do not have your own application installed, install the Bookinfo sample application. Then generate traffic for the Bookinfo application by entering the following command several times. USD curl "http://USDGATEWAY_URL/productpage" This command simulates a user visiting the productpage microservice of the application. Procedure Launch the Kiali console. Click Log In With OpenShift . In the Kiali console, click Graph to view a namespace graph. From the Namespace menu, select your application namespace, for example, bookinfo . To choose a different graph layout, do either or both of the following: Select different graph data groupings from the menu at the top of the graph. App graph Service graph Versioned App graph (default) Workload graph Select a different graph layout from the Legend at the bottom of the graph. Layout default dagre Layout 1 cose-bilkent Layout 2 cola 1.15.3.2. Viewing logs in the Kiali console You can view logs for your workloads in the Kiali console. The Workload Detail page includes a Logs tab that displays a unified view of both application and proxy logs. You can select how often you want the log display in Kiali to be refreshed. To change the logging level on the logs displayed in Kiali, you change the logging configuration for the workload or the proxy. Prerequisites Service Mesh installed and configured. Kiali installed and configured. The address for the Kiali console. Application or Bookinfo sample application added to the mesh. Procedure Launch the Kiali console. Click Log In With OpenShift . The Kiali Overview page displays namespaces that have been added to the mesh that you have permissions to view. Click Workloads . On the Workloads page, select the project from the Namespace menu.
If necessary, use the filter to find the workload whose logs you want to view. Click the workload Name . For example, click ratings-v1 . On the Workload Details page, click the Logs tab to view the logs for the workload. Tip If you do not see any log entries, you may need to adjust either the Time Range or the Refresh interval. 1.15.3.3. Viewing metrics in the Kiali console You can view inbound and outbound metrics for your applications, workloads, and services in the Kiali console. The Detail pages include the following tabs: inbound Application metrics outbound Application metrics inbound Workload metrics outbound Workload metrics inbound Service metrics These tabs display predefined metrics dashboards, tailored to the relevant application, workload or service level. The application and workload detail views show request and response metrics such as volume, duration, size, or TCP traffic. The service detail view shows request and response metrics for inbound traffic only. Kiali lets you customize the charts by choosing the charted dimensions. Kiali can also present metrics reported by either source or destination proxy metrics. And for troubleshooting, Kiali can overlay trace spans on the metrics. Prerequisites Service Mesh installed and configured. Kiali installed and configured. The address for the Kiali console. (Optional) Distributed tracing installed and configured. Procedure Launch the Kiali console. Click Log In With OpenShift . The Kiali Overview page displays namespaces that have been added to the mesh that you have permissions to view. Click either Applications , Workloads , or Services . On the Applications , Workloads , or Services page, select the project from the Namespace menu. If necessary, use the filter to find the application, workload, or service whose logs you want to view. Click the Name . On the Application Detail , Workload Details , or Service Details page, click either the Inbound Metrics or Outbound Metrics tab to view the metrics. 1.15.4. Distributed tracing Distributed tracing is the process of tracking the performance of individual services in an application by tracing the path of the service calls in the application. Each time a user takes action in an application, a request is executed that might require many services to interact to produce a response. The path of this request is called a distributed transaction. Red Hat OpenShift Service Mesh uses Red Hat OpenShift distributed tracing to allow developers to view call flows in a microservice application. 1.15.4.1. Connecting an existing distributed tracing instance If you already have an existing Red Hat OpenShift distributed tracing platform instance in OpenShift Container Platform, you can configure your ServiceMeshControlPlane resource to use that instance for distributed tracing. Prerequisites Red Hat OpenShift distributed tracing instance installed and configured. Procedure In the OpenShift Container Platform web console, click Operators Installed Operators . Click the Project menu and select the project where you installed the Service Mesh control plane, for example istio-system . Click the Red Hat OpenShift Service Mesh Operator. In the Istio Service Mesh Control Plane column, click the name of your ServiceMeshControlPlane resource, for example basic . Add the name of your distributed tracing platform instance to the ServiceMeshControlPlane . Click the YAML tab. Add the name of your distributed tracing platform instance to spec.addons.jaeger.name in your ServiceMeshControlPlane resource. 
In the following example, distr-tracing-production is the name of the distributed tracing platform instance. Example distributed tracing configuration spec: addons: jaeger: name: distr-tracing-production Click Save . Click Reload to verify the ServiceMeshControlPlane resource was configured correctly. 1.15.4.2. Adjusting the sampling rate A trace is an execution path between services in the service mesh. A trace is comprised of one or more spans. A span is a logical unit of work that has a name, start time, and duration. The sampling rate determines how often a trace is persisted. The Envoy proxy sampling rate is set to sample 100% of traces in your service mesh by default. A high sampling rate consumes cluster resources and performance but is useful when debugging issues. Before you deploy Red Hat OpenShift Service Mesh in production, set the value to a smaller proportion of traces. For example, set spec.tracing.sampling to 100 to sample 1% of traces. Configure the Envoy proxy sampling rate as a scaled integer representing 0.01% increments. In a basic installation, spec.tracing.sampling is set to 10000 , which samples 100% of traces. For example: Setting the value to 10 samples 0.1% of traces. Setting the value to 500 samples 5% of traces. Note The Envoy proxy sampling rate applies for applications that are available to a Service Mesh, and use the Envoy proxy. This sampling rate determines how much data the Envoy proxy collects and tracks. The Jaeger remote sampling rate applies to applications that are external to the Service Mesh, and do not use the Envoy proxy, such as a database. This sampling rate determines how much data the distributed tracing system collects and stores. For more information, see Distributed tracing configuration options . Procedure In the OpenShift Container Platform web console, click Operators Installed Operators . Click the Project menu and select the project where you installed the control plane, for example istio-system . Click the Red Hat OpenShift Service Mesh Operator. In the Istio Service Mesh Control Plane column, click the name of your ServiceMeshControlPlane resource, for example basic . To adjust the sampling rate, set a different value for spec.tracing.sampling . Click the YAML tab. Set the value for spec.tracing.sampling in your ServiceMeshControlPlane resource. In the following example, set it to 100 . Jaeger sampling example spec: tracing: sampling: 100 Click Save . Click Reload to verify the ServiceMeshControlPlane resource was configured correctly. 1.15.5. Accessing the Jaeger console To access the Jaeger console you must have Red Hat OpenShift Service Mesh installed, Red Hat OpenShift distributed tracing platform installed and configured. The installation process creates a route to access the Jaeger console. If you know the URL for the Jaeger console, you can access it directly. If you do not know the URL, use the following directions. Procedure from OpenShift console Log in to the OpenShift Container Platform web console as a user with cluster-admin rights. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Navigate to Networking Routes . On the Routes page, select the Service Mesh control plane project, for example istio-system , from the Namespace menu. The Location column displays the linked address for each route. If necessary, use the filter to find the jaeger route. Click the route Location to launch the console. Click Log In With OpenShift . 
Procedure from Kiali console Launch the Kiali console. Click Distributed Tracing in the left navigation pane. Click Log In With OpenShift . Procedure from the CLI Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. USD oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443 To query for details of the route using the command line, enter the following command. In this example, istio-system is the Service Mesh control plane namespace. USD export JAEGER_URL=USD(oc get route -n istio-system jaeger -o jsonpath='{.spec.host}') Launch a browser and navigate to https://<JAEGER_URL> , where <JAEGER_URL> is the route that you discovered in the step. Log in using the same user name and password that you use to access the OpenShift Container Platform console. If you have added services to the service mesh and have generated traces, you can use the filters and Find Traces button to search your trace data. If you are validating the console installation, there is no trace data to display. For more information about configuring Jaeger, see the distributed tracing documentation . 1.15.6. Accessing the Grafana console Grafana is an analytics tool you can use to view, query, and analyze your service mesh metrics. In this example, istio-system is the Service Mesh control plane namespace. To access Grafana, do the following: Procedure Log in to the OpenShift Container Platform web console. Click the Project menu and select the project where you installed the Service Mesh control plane, for example istio-system . Click Routes . Click the link in the Location column for the Grafana row. Log in to the Grafana console with your OpenShift Container Platform credentials. 1.15.7. Accessing the Prometheus console Prometheus is a monitoring and alerting tool that you can use to collect multi-dimensional data about your microservices. In this example, istio-system is the Service Mesh control plane namespace. Procedure Log in to the OpenShift Container Platform web console. Click the Project menu and select the project where you installed the Service Mesh control plane, for example istio-system . Click Routes . Click the link in the Location column for the Prometheus row. Log in to the Prometheus console with your OpenShift Container Platform credentials. 1.16. Performance and scalability The default ServiceMeshControlPlane settings are not intended for production use; they are designed to install successfully on a default OpenShift Container Platform installation, which is a resource-limited environment. After you have verified a successful SMCP installation, you should modify the settings defined within the SMCP to suit your environment. 1.16.1. Setting limits on compute resources By default, spec.proxy has the settings cpu: 10m and memory: 128M . If you are using Pilot, spec.runtime.components.pilot has the same default values. The settings in the following example are based on 1,000 services and 1,000 requests per second. You can change the values for cpu and memory in the ServiceMeshControlPlane . Procedure In the OpenShift Container Platform web console, click Operators Installed Operators . Click the Project menu and select the project where you installed the Service Mesh control plane, for example istio-system . Click the Red Hat OpenShift Service Mesh Operator. In the Istio Service Mesh Control Plane column, click the name of your ServiceMeshControlPlane , for example basic . 
Click the YAML tab. Set the values for spec.proxy.runtime.container.resources.requests.cpu and spec.proxy.runtime.container.resources.requests.memory in your ServiceMeshControlPlane resource. Example version 2.3 ServiceMeshControlPlane apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: version: v2.3 proxy: runtime: container: resources: requests: cpu: 600m memory: 50Mi limits: {} runtime: components: pilot: container: resources: requests: cpu: 1000m memory: 1.6Gi limits: {} Click Save . Click Reload to verify the ServiceMeshControlPlane resource was configured correctly. 1.16.2. Load test results The upstream Istio community load test mesh consists of 1000 services and 2000 sidecars with 70,000 mesh-wide requests per second. Running the tests using Istio 1.12.3 generated the following results: The Envoy proxy uses 0.35 vCPU and 40 MB memory per 1000 requests per second going through the proxy. Istiod uses 1 vCPU and 1.5 GB of memory. The Envoy proxy adds 2.65 ms to the 90th percentile latency. The legacy istio-telemetry service (disabled by default in Service Mesh 2.0) uses 0.6 vCPU per 1000 mesh-wide requests per second for deployments that use Mixer. The data plane components, the Envoy proxies, handle data flowing through the system. The Service Mesh control plane component, Istiod, configures the data plane. The data plane and control plane have distinct performance concerns. 1.16.2.1. Service Mesh control plane performance Istiod configures sidecar proxies based on user-authored configuration files and the current state of the system. In a Kubernetes environment, Custom Resource Definitions (CRDs) and deployments constitute the configuration and state of the system. The Istio configuration objects, like gateways and virtual services, provide the user-authored configuration. To produce the configuration for the proxies, Istiod processes the combined configuration and system state from the Kubernetes environment and the user-authored configuration. The Service Mesh control plane supports thousands of services, spread across thousands of pods, with a similar number of user-authored virtual services and other configuration objects. Istiod's CPU and memory requirements scale with the number of configurations and possible system states. The CPU consumption scales with the following factors: The rate of deployment changes. The rate of configuration changes. The number of proxies connecting to Istiod. However, this part is inherently horizontally scalable. 1.16.2.2. Data plane performance Data plane performance depends on many factors, for example: Number of client connections Target request rate Request size and response size Number of proxy worker threads Protocol CPU cores Number and types of proxy filters, specifically telemetry v2 related filters. The latency, throughput, and the proxies' CPU and memory consumption are measured as a function of these factors. 1.16.2.2.1. CPU and memory consumption Since the sidecar proxy performs additional work on the data path, it consumes CPU and memory. As of Istio 1.12.3, a proxy consumes about 0.5 vCPU per 1000 requests per second. The memory consumption of the proxy depends on the total configuration state the proxy holds. A large number of listeners, clusters, and routes can increase memory usage.
Since the proxy normally doesn't buffer the data passing through, request rate doesn't affect the memory consumption. 1.16.2.2.2. Additional latency Since Istio injects a sidecar proxy on the data path, latency is an important consideration. Istio adds an authentication filter, a telemetry filter, and a metadata exchange filter to the proxy. Every additional filter adds to the path length inside the proxy and affects latency. The Envoy proxy collects raw telemetry data after a response is sent to the client. The time spent collecting raw telemetry for a request does not contribute to the total time taken to complete that request. However, since the worker is busy handling the request, the worker won't start handling the request immediately. This process adds to the queue wait time of the request and affects average and tail latencies. The actual tail latency depends on the traffic pattern. Inside the mesh, a request traverses the client-side proxy and then the server-side proxy. In the default configuration of Istio 1.12.3 (that is, Istio with telemetry v2), the two proxies add about 1.7 ms and 2.7 ms to the 90th and 99th percentile latency, respectively, over the baseline data plane latency. 1.17. Configuring Service Mesh for production When you are ready to move from a basic installation to production, you must configure your control plane, tracing, and security certificates to meet production requirements. Prerequisites Install and configure Red Hat OpenShift Service Mesh. Test your configuration in a staging environment. 1.17.1. Configuring your ServiceMeshControlPlane resource for production If you have installed a basic ServiceMeshControlPlane resource to test Service Mesh, you must configure it to production specification before you use Red Hat OpenShift Service Mesh in production. You cannot change the metadata.name field of an existing ServiceMeshControlPlane resource. For production deployments, you must customize the default template. Procedure Configure the distributed tracing platform for production. Edit the ServiceMeshControlPlane resource to use the production deployment strategy, by setting spec.addons.jaeger.install.storage.type to Elasticsearch and specify additional configuration options under install . You can create and configure your Jaeger instance and set spec.addons.jaeger.name to the name of the Jaeger instance. Default Jaeger parameters including Elasticsearch apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.3 tracing: sampling: 100 type: Jaeger addons: jaeger: name: MyJaeger install: storage: type: Elasticsearch ingress: enabled: true runtime: components: tracing.jaeger.elasticsearch: # only supports resources and image name container: resources: {} Configure the sampling rate for production. For more information, see the Performance and scalability section. Ensure your security certificates are production ready by installing security certificates from an external certificate authority. For more information, see the Security section. Verify the results. Enter the following command to verify that the ServiceMeshControlPlane resource updated properly. In this example, basic is the name of the ServiceMeshControlPlane resource. USD oc get smcp basic -o yaml 1.17.2. Additional resources For more information about tuning Service Mesh for performance, see Performance and scalability . 1.18. 
Connecting service meshes Federation is a deployment model that lets you share services and workloads between separate meshes managed in distinct administrative domains. 1.18.1. Federation overview Federation is a set of features that let you connect services between separate meshes, allowing the use of Service Mesh features such as authentication, authorization, and traffic management across multiple, distinct administrative domains. Implementing a federated mesh lets you run, manage, and observe a single service mesh running across multiple OpenShift clusters. Red Hat OpenShift Service Mesh federation takes an opinionated approach to a multi-cluster implementation of Service Mesh that assumes minimal trust between meshes. Service Mesh federation assumes that each mesh is managed individually and retains its own administrator. The default behavior is that no communication is permitted and no information is shared between meshes. The sharing of information between meshes is on an explicit opt-in basis. Nothing is shared in a federated mesh unless it has been configured for sharing. Support functions such as certificate generation, metrics and trace collection remain local in their respective meshes. You configure the ServiceMeshControlPlane on each service mesh to create ingress and egress gateways specifically for the federation, and to specify the trust domain for the mesh. Federation also involves the creation of additional federation files. The following resources are used to configure the federation between two or more meshes. A ServiceMeshPeer resource declares the federation between a pair of service meshes. An ExportedServiceSet resource declares that one or more services from the mesh are available for use by a peer mesh. An ImportedServiceSet resource declares which services exported by a peer mesh will be imported into the mesh. 1.18.2. Federation features Features of the Red Hat OpenShift Service Mesh federated approach to joining meshes include the following: Supports common root certificates for each mesh. Supports different root certificates for each mesh. Mesh administrators must manually configure certificate chains, service discovery endpoints, trust domains, etc for meshes outside of the Federated mesh. Only export/import the services that you want to share between meshes. Defaults to not sharing information about deployed workloads with other meshes in the federation. A service can be exported to make it visible to other meshes and allow requests from workloads outside of its own mesh. A service that has been exported can be imported to another mesh, enabling workloads on that mesh to send requests to the imported service. Encrypts communication between meshes at all times. Supports configuring load balancing across workloads deployed locally and workloads that are deployed in another mesh in the federation. When a mesh is joined to another mesh it can do the following: Provide trust details about itself to the federated mesh. Discover trust details about the federated mesh. Provide information to the federated mesh about its own exported services. Discover information about services exported by the federated mesh. 1.18.3. Federation security Red Hat OpenShift Service Mesh federation takes an opinionated approach to a multi-cluster implementation of Service Mesh that assumes minimal trust between meshes. Data security is built in as part of the federation features. Each mesh is considered to be a unique tenant, with a unique administration. 
You create a unique trust domain for each mesh in the federation. Traffic between the federated meshes is automatically encrypted using mutual Transport Layer Security (mTLS). The Kiali graph only displays your mesh and services that you have imported. You cannot see the other mesh or services that have not been imported into your mesh. 1.18.4. Federation limitations The Red Hat OpenShift Service Mesh federated approach to joining meshes has the following limitations: Federation of meshes is not supported on OpenShift Dedicated. 1.18.5. Federation prerequisites The Red Hat OpenShift Service Mesh federated approach to joining meshes has the following prerequisites: Two or more OpenShift Container Platform 4.6 or above clusters. Federation was introduced in Red Hat OpenShift Service Mesh 2.1 or later. You must have the Red Hat OpenShift Service Mesh 2.1 or later Operator installed on each mesh that you want to federate. You must have a version 2.1 or later ServiceMeshControlPlane deployed on each mesh that you want to federate. You must configure the load balancers supporting the services associated with the federation gateways to support raw TLS traffic. Federation traffic consists of HTTPS for discovery and raw encrypted TCP for service traffic. Services that you want to expose to another mesh should be deployed before you can export and import them. However, this is not a strict requirement. You can specify service names that do not yet exist for export/import. When you deploy the services named in the ExportedServiceSet and ImportedServiceSet they will be automatically made available for export/import. 1.18.6. Planning your mesh federation Before you start configuring your mesh federation, you should take some time to plan your implementation. How many meshes do you plan to join in a federation? You probably want to start with a limited number of meshes, perhaps two or three. What naming convention do you plan to use for each mesh? Having a pre-defined naming convention will help with configuration and troubleshooting. The examples in this documentation use different colors for each mesh. You should decide on a naming convention that will help you determine who owns and manages each mesh, as well as the following federation resources: Cluster names Cluster network names Mesh names and namespaces Federation ingress gateways Federation egress gateways Security trust domains Note Each mesh in the federation must have its own unique trust domain. Which services from each mesh do you plan to export to the federated mesh? Each service can be exported individually, or you can specify labels or use wildcards. Do you want to use aliases for the service namespaces? Do you want to use aliases for the exported services? Which exported services does each mesh plan to import? Each mesh only imports the services that it needs. Do you want to use aliases for the imported services? 1.18.7. Mesh federation across clusters To connect one instance of the OpenShift Service Mesh with one running in a different cluster, the procedure is not much different as when connecting two meshes deployed in the same cluster. However, the ingress gateway of one mesh must be reachable from the other mesh. One way of ensuring this is to configure the gateway service as a LoadBalancer service if the cluster supports this type of service. The service must be exposed through a load balancer that operates at Layer4 of the OSI model. 1.18.7.1. 
Exposing the federation ingress on clusters running on bare metal If the cluster runs on bare metal and fully supports LoadBalancer services, the IP address found in the .status.loadBalancer.ingress.ip field of the ingress gateway Service object should be specified as one of the entries in the .spec.remote.addresses field of the ServiceMeshPeer object. If the cluster does not support LoadBalancer services, using a NodePort service could be an option if the nodes are accessible from the cluster running the other mesh. In the ServiceMeshPeer object, specify the IP addresses of the nodes in the .spec.remote.addresses field and the service's node ports in the .spec.remote.discoveryPort and .spec.remote.servicePort fields. 1.18.7.2. Exposing the federation ingress on clusters running on IBM Power and IBM Z If the cluster runs on IBM Power or IBM Z infrastructure and fully supports LoadBalancer services, the IP address found in the .status.loadBalancer.ingress.ip field of the ingress gateway Service object should be specified as one of the entries in the .spec.remote.addresses field of the ServiceMeshPeer object. If the cluster does not support LoadBalancer services, using a NodePort service could be an option if the nodes are accessible from the cluster running the other mesh. In the ServiceMeshPeer object, specify the IP addresses of the nodes in the .spec.remote.addresses field and the service's node ports in the .spec.remote.discoveryPort and .spec.remote.servicePort fields. 1.18.7.3. Exposing the federation ingress on Amazon Web Services (AWS) By default, LoadBalancer services in clusters running on AWS do not support L4 load balancing. In order for Red Hat OpenShift Service Mesh federation to operate correctly, the following annotation must be added to the ingress gateway service: service.beta.kubernetes.io/aws-load-balancer-type: nlb The Fully Qualified Domain Name found in the .status.loadBalancer.ingress.hostname field of the ingress gateway Service object should be specified as one of the entries in the .spec.remote.addresses field of the ServiceMeshPeer object. 1.18.7.4. Exposing the federation ingress on Azure On Microsoft Azure, merely setting the service type to LoadBalancer suffices for mesh federation to operate correctly. The IP address found in the .status.loadBalancer.ingress.ip field of the ingress gateway Service object should be specified as one of the entries in the .spec.remote.addresses field of the ServiceMeshPeer object. 1.18.7.5. Exposing the federation ingress on Google Cloud Platform (GCP) On Google Cloud Platform, merely setting the service type to LoadBalancer suffices for mesh federation to operate correctly. The IP address found in the .status.loadBalancer.ingress.ip field of the ingress gateway Service object should be specified as one of the entries in the .spec.remote.addresses field of the ServiceMeshPeer object. 1.18.8. Federation implementation checklist Federating services meshes involves the following activities: ❏ Configure networking between the clusters that you are going to federate. ❏ Configure the load balancers supporting the services associated with the federation gateways to support raw TLS traffic. ❏ Installing the Red Hat OpenShift Service Mesh version 2.1 or later Operator in each of your clusters. ❏ Deploying a version 2.1 or later ServiceMeshControlPlane to each of your clusters. ❏ Configuring the SMCP for federation for each mesh that you want to federate: ❏ Create a federation egress gateway for each mesh you are going to federate with. 
❏ Create a federation ingress gateway for each mesh you are going to federate with. ❏ Configure a unique trust domain. ❏ Federate two or more meshes by creating a ServiceMeshPeer resource for each mesh pair. ❏ Export services by creating an ExportedServiceSet resource to make services available from one mesh to a peer mesh. ❏ Import services by creating an ImportedServiceSet resource to import services shared by a mesh peer. 1.18.9. Configuring a Service Mesh control plane for federation Before a mesh can be federated, you must configure the ServiceMeshControlPlane for mesh federation. Because all meshes that are members of the federation are equal, and each mesh is managed independently, you must configure the SMCP for each mesh that will participate in the federation. In the following example, the administrator for the red-mesh is configuring the SMCP for federation with both the green-mesh and the blue-mesh . Sample SMCP for red-mesh apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: red-mesh namespace: red-mesh-system spec: version: v2.3 runtime: defaults: container: imagePullPolicy: Always gateways: additionalEgress: egress-green-mesh: enabled: true requestedNetworkView: - green-network routerMode: sni-dnat service: metadata: labels: federation.maistra.io/egress-for: egress-green-mesh ports: - port: 15443 name: tls - port: 8188 name: http-discovery #note HTTP here egress-blue-mesh: enabled: true requestedNetworkView: - blue-network routerMode: sni-dnat service: metadata: labels: federation.maistra.io/egress-for: egress-blue-mesh ports: - port: 15443 name: tls - port: 8188 name: http-discovery #note HTTP here additionalIngress: ingress-green-mesh: enabled: true routerMode: sni-dnat service: type: LoadBalancer metadata: labels: federation.maistra.io/ingress-for: ingress-green-mesh ports: - port: 15443 name: tls - port: 8188 name: https-discovery #note HTTPS here ingress-blue-mesh: enabled: true routerMode: sni-dnat service: type: LoadBalancer metadata: labels: federation.maistra.io/ingress-for: ingress-blue-mesh ports: - port: 15443 name: tls - port: 8188 name: https-discovery #note HTTPS here security: trust: domain: red-mesh.local Table 1.6. ServiceMeshControlPlane federation configuration parameters Parameter Description Values Default value Name of the cluster. You are not required to specify a cluster name, but it is helpful for troubleshooting. String N/A Name of the cluster network. You are not required to specify a name for the network, but it is helpful for configuration and troubleshooting. String N/A 1.18.9.1. Understanding federation gateways You use a gateway to manage inbound and outbound traffic for your mesh, letting you specify which traffic you want to enter or leave the mesh. You use ingress and egress gateways to manage traffic entering and leaving the service mesh (North-South traffic). When you create a federated mesh, you create additional ingress/egress gateways, to facilitate service discovery between federated meshes, communication between federated meshes, and to manage traffic flow between service meshes (East-West traffic). To avoid naming conflicts between meshes, you must create separate egress and ingress gateways for each mesh. For example, red-mesh would have separate egress gateways for traffic going to green-mesh and blue-mesh . Table 1.7. Federation gateway parameters Parameter Description Values Default value Define an additional egress gateway for each mesh peer in the federation. 
This parameter enables or disables the federation egress. true / false true Networks associated with exported services. Set to the value of spec.cluster.network in the SMCP for the mesh, otherwise <ServiceMeshPeer-name>-network is used. For example, if the ServiceMeshPeer resource for that mesh is named west , then the network would be named west-network . The router mode to be used by the gateway. sni-dnat Specify a unique label for the gateway to prevent federated traffic from flowing through the cluster's default system gateways. Used to specify the port: and name: used for TLS and service discovery. Federation traffic consists of raw encrypted TCP for service traffic. Port 15443 is required for sending TLS service requests to other meshes in the federation. Port 8188 is required for sending service discovery requests to other meshes in the federation. Define an additional ingress gateway for each mesh peer in the federation. This parameter enables or disables the federation ingress. true / false true The router mode to be used by the gateway. sni-dnat The ingress gateway service must be exposed through a load balancer that operates at Layer 4 of the OSI model and is publicly available. LoadBalancer If the cluster does not support LoadBalancer services, the ingress gateway service can be exposed through a NodePort service. NodePort Specify a unique label for the gateway to prevent federated traffic from flowing through the cluster's default system gateways. Used to specify the port: and name: used for TLS and service discovery. Federation traffic consists of raw encrypted TCP for service traffic. Federation traffic consists of HTTPS for discovery. Port 15443 is required for receiving TLS service requests from other meshes in the federation. Port 8188 is required for receiving service discovery requests from other meshes in the federation. Used to specify the nodePort: if the cluster does not support LoadBalancer services. If specified, it is required in addition to port: and name: for both TLS and service discovery. nodePort: must be in the range 30000 - 32767 . In the following example, the administrator is configuring the SMCP for federation with the green-mesh using a NodePort service. Sample SMCP for NodePort gateways: additionalIngress: ingress-green-mesh: enabled: true routerMode: sni-dnat service: type: NodePort metadata: labels: federation.maistra.io/ingress-for: ingress-green-mesh ports: - port: 15443 nodePort: 30510 name: tls - port: 8188 nodePort: 32359 name: https-discovery 1.18.9.2. Understanding federation trust domain parameters Each mesh in the federation must have its own unique trust domain. This value is used when configuring mesh federation in the ServiceMeshPeer resource. kind: ServiceMeshControlPlane metadata: name: red-mesh namespace: red-mesh-system spec: security: trust: domain: red-mesh.local Table 1.8. Federation security parameters Parameter Description Values Default value Used to specify a unique name for the trust domain for the mesh. Domains must be unique for every mesh in the federation. <mesh-name>.local N/A Procedure from the Console Follow this procedure to edit the ServiceMeshControlPlane with the OpenShift Container Platform web console. This procedure uses the red-mesh as an example. Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Navigate to Operators Installed Operators . Click the Project menu and select the project where you installed the Service Mesh control plane. For example, red-mesh-system .
Click the Red Hat OpenShift Service Mesh Operator. On the Istio Service Mesh Control Plane tab, click the name of your ServiceMeshControlPlane , for example red-mesh . On the Create ServiceMeshControlPlane Details page, click YAML to modify your configuration. Modify your ServiceMeshControlPlane to add federation ingress and egress gateways and to specify the trust domain. Click Save . Procedure from the CLI Follow this procedure to create or edit the ServiceMeshControlPlane with the command line. This example uses the red-mesh as an example. Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. Enter the following command. Then, enter your username and password when prompted. USD oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443 Change to the project where you installed the Service Mesh control plane, for example red-mesh-system. USD oc project red-mesh-system Edit the ServiceMeshControlPlane file to add federation ingress and egress gateways and to specify the trust domain. Run the following command to edit the Service Mesh control plane where red-mesh-system is the system namespace and red-mesh is the name of the ServiceMeshControlPlane object: USD oc edit -n red-mesh-system smcp red-mesh Enter the following command, where red-mesh-system is the system namespace, to see the status of the Service Mesh control plane installation. USD oc get smcp -n red-mesh-system The installation has finished successfully when the READY column indicates that all components are ready. 1.18.10. Joining a federated mesh You declare the federation between two meshes by creating a ServiceMeshPeer resource. The ServiceMeshPeer resource defines the federation between two meshes, and you use it to configure discovery for the peer mesh, access to the peer mesh, and certificates used to validate the other mesh's clients. Meshes are federated on a one-to-one basis, so each pair of peers requires a pair of ServiceMeshPeer resources specifying the federation connection to the other service mesh. For example, federating two meshes named red and green would require two ServiceMeshPeer files. On red-mesh-system, create a ServiceMeshPeer for the green mesh. On green-mesh-system, create a ServiceMeshPeer for the red mesh. Federating three meshes named red , blue , and green would require six ServiceMeshPeer files. On red-mesh-system, create a ServiceMeshPeer for the green mesh. On red-mesh-system, create a ServiceMeshPeer for the blue mesh. On green-mesh-system, create a ServiceMeshPeer for the red mesh. On green-mesh-system, create a ServiceMeshPeer for the blue mesh. On blue-mesh-system, create a ServiceMeshPeer for the red mesh. On blue-mesh-system, create a ServiceMeshPeer for the green mesh. Configuration in the ServiceMeshPeer resource includes the following: The address of the other mesh's ingress gateway, which is used for discovery and service requests. The names of the local ingress and egress gateways that is used for interactions with the specified peer mesh. The client ID used by the other mesh when sending requests to this mesh. The trust domain used by the other mesh. The name of a ConfigMap containing a root certificate that is used to validate client certificates in the trust domain used by the other mesh. In the following example, the administrator for the red-mesh is configuring federation with the green-mesh . 
Example ServiceMeshPeer resource for red-mesh kind: ServiceMeshPeer apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: remote: addresses: - ingress-red-mesh.green-mesh-system.apps.domain.com gateways: ingress: name: ingress-green-mesh egress: name: egress-green-mesh security: trustDomain: green-mesh.local clientID: green-mesh.local/ns/green-mesh-system/sa/egress-red-mesh-service-account certificateChain: kind: ConfigMap name: green-mesh-ca-root-cert Table 1.9. ServiceMeshPeer configuration parameters Parameter Description Values Name of the peer mesh that this resource is configuring federation with. String System namespace for this mesh, that is, where the Service Mesh control plane is installed. String List of public addresses of the peer meshes' ingress gateways that are servicing requests from this mesh. The port on which the addresses are handling discovery requests. Defaults to 8188 The port on which the addresses are handling service requests. Defaults to 15443 Name of the ingress on this mesh that is servicing requests received from the peer mesh. For example, ingress-green-mesh . Name of the egress on this mesh that is servicing requests sent to the peer mesh. For example, egress-green-mesh . The trust domain used by the peer mesh. <peerMeshName>.local The client ID used by the peer mesh when calling into this mesh. <peerMeshTrustDomain>/ns/<peerMeshSystem>/sa/<peerMeshEgressGatewayName>-service-account The kind (for example, ConfigMap) and name of a resource containing the root certificate used to validate the client and server certificate(s) presented to this mesh by the peer mesh. The key of the config map entry containing the certificate should be root-cert.pem . kind: ConfigMap name: <peerMesh>-ca-root-cert 1.18.10.1. Creating a ServiceMeshPeer resource Prerequisites Two or more OpenShift Container Platform 4.6 or above clusters. The clusters must already be networked. The load balancers supporting the services associated with the federation gateways must be configured to support raw TLS traffic. Each cluster must have a version 2.1 or later ServiceMeshControlPlane configured to support federation deployed. An account with the cluster-admin role. Procedure from the CLI Follow this procedure to create a ServiceMeshPeer resource from the command line. This example shows the red-mesh creating a peer resource for the green-mesh . Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. Enter the following command. Then, enter your username and password when prompted. USD oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443 Change to the project where you installed the control plane, for example, red-mesh-system . USD oc project red-mesh-system Create a ServiceMeshPeer file based the following example for the two meshes that you want to federate. 
Example ServiceMeshPeer resource for red-mesh to green-mesh kind: ServiceMeshPeer apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: remote: addresses: - ingress-red-mesh.green-mesh-system.apps.domain.com gateways: ingress: name: ingress-green-mesh egress: name: egress-green-mesh security: trustDomain: green-mesh.local clientID: green-mesh.local/ns/green-mesh-system/sa/egress-red-mesh-service-account certificateChain: kind: ConfigMap name: green-mesh-ca-root-cert Run the following command to deploy the resource, where red-mesh-system is the system namespace and servicemeshpeer.yaml includes a full path to the file you edited: USD oc create -n red-mesh-system -f servicemeshpeer.yaml To confirm that connection between the red mesh and green mesh is established, inspect the status of the green-mesh ServiceMeshPeer in the red-mesh-system namespace: USD oc -n red-mesh-system get servicemeshpeer green-mesh -o yaml Example ServiceMeshPeer connection between red-mesh and green-mesh status: discoveryStatus: active: - pod: istiod-red-mesh-b65457658-9wq5j remotes: - connected: true lastConnected: "2021-10-05T13:02:25Z" lastFullSync: "2021-10-05T13:02:25Z" source: 10.128.2.149 watch: connected: true lastConnected: "2021-10-05T13:02:55Z" lastDisconnectStatus: 503 Service Unavailable lastFullSync: "2021-10-05T13:05:43Z" The status.discoveryStatus.active.remotes field shows that istiod in the peer mesh (in this example, the green mesh) is connected to istiod in the current mesh (in this example, the red mesh). The status.discoveryStatus.active.watch field shows that istiod in the current mesh is connected to istiod in the peer mesh. If you check the servicemeshpeer named red-mesh in green-mesh-system , you'll find information about the same two connections from the perspective of the green mesh. When the connection between two meshes is not established, the ServiceMeshPeer status indicates this in the status.discoveryStatus.inactive field. For more information on why a connection attempt failed, inspect the Istiod log, the access log of the egress gateway handling egress traffic for the peer, and the ingress gateway handling ingress traffic for the current mesh in the peer mesh. For example, if the red mesh can't connect to the green mesh, check the following logs: istiod-red-mesh in red-mesh-system egress-green-mesh in red-mesh-system ingress-red-mesh in green-mesh-system 1.18.11. Exporting a service from a federated mesh Exporting services allows a mesh to share one or more of its services with another member of the federated mesh. You use an ExportedServiceSet resource to declare the services from one mesh that you are making available to another peer in the federated mesh. You must explicitly declare each service to be shared with a peer. You can select services by namespace or name. You can use wildcards to select services; for example, to export all the services in a namespace. You can export services using an alias. For example, you can export the foo/bar service as custom-ns/bar . You can only export services that are visible to the mesh's system namespace. For example, a service in another namespace with a networking.istio.io/exportTo label set to '.' would not be a candidate for export. 
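For instance, the foo/bar aliasing mentioned above could be declared with a rule such as the following minimal sketch; the peer mesh name some-mesh and the system namespace foo-mesh-system are placeholders introduced for illustration, not values taken from this documentation. The rule structure matches the NameSelector with alias shown in the full example that follows.
Hypothetical ExportedServiceSet rule exporting foo/bar as custom-ns/bar
kind: ExportedServiceSet
apiVersion: federation.maistra.io/v1
metadata:
  name: some-mesh            # placeholder: name of the peer mesh that receives the export
  namespace: foo-mesh-system # placeholder: system namespace of the exporting mesh
spec:
  exportRules:
  # export foo/bar so the peer mesh sees it as custom-ns/bar
  - type: NameSelector
    nameSelector:
      namespace: foo
      name: bar
      alias:
        namespace: custom-ns
        name: bar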
For exported services, their target services will only see traffic from the ingress gateway, not the original requestor (that is, they won't see the client ID of either the other mesh's egress gateway or the workload originating the request) The following example is for services that red-mesh is exporting to green-mesh . Example ExportedServiceSet resource kind: ExportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: exportRules: # export ratings.mesh-x-bookinfo as ratings.bookinfo - type: NameSelector nameSelector: namespace: red-mesh-bookinfo name: red-ratings alias: namespace: bookinfo name: ratings # export any service in red-mesh-bookinfo namespace with label export-service=true - type: LabelSelector labelSelector: namespace: red-mesh-bookinfo selector: matchLabels: export-service: "true" aliases: # export all matching services as if they were in the bookinfo namespace - namespace: "*" name: "*" alias: namespace: bookinfo Table 1.10. ExportedServiceSet parameters Parameter Description Values Name of the ServiceMeshPeer you are exposing this service to. Must match the name value for the mesh in the ServiceMeshPeer resource. Name of the project/namespace containing this resource (should be the system namespace for the mesh) . Type of rule that will govern the export for this service. The first matching rule found for the service will be used for the export. NameSelector , LabelSelector To create a NameSelector rule, specify the namespace of the service and the name of the service as defined in the Service resource. To create a NameSelector rule that uses an alias for the service, after specifying the namespace and name for the service, then specify the alias for the namespace and the alias to be used for name of the service. To create a LabelSelector rule, specify the namespace of the service and specify the label defined in the Service resource. In the example above, the label is export-service . To create a LabelSelector rule that uses aliases for the services, after specifying the selector , specify the aliases to be used for name or namespace of the service. In the example above, the namespace alias is bookinfo for all matching services. Export services with the name "ratings" from all namespaces in the red-mesh to blue-mesh. kind: ExportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: blue-mesh namespace: red-mesh-system spec: exportRules: - type: NameSelector nameSelector: namespace: "*" name: ratings Export all services from the west-data-center namespace to green-mesh kind: ExportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: exportRules: - type: NameSelector nameSelector: namespace: west-data-center name: "*" 1.18.11.1. Creating an ExportedServiceSet You create an ExportedServiceSet resource to explicitly declare the services that you want to be available to a mesh peer. Services are exported as <export-name>.<export-namespace>.svc.<ServiceMeshPeer.name>-exports.local and will automatically route to the target service. This is the name by which the exported service is known in the exporting mesh. When the ingress gateway receives a request destined for this name, it will be routed to the actual service being exported. 
For example, if a service named ratings.red-mesh-bookinfo is exported to green-mesh as ratings.bookinfo , the service will be exported under the name ratings.bookinfo.svc.green-mesh-exports.local , and traffic received by the ingress gateway for that hostname will be routed to the ratings.red-mesh-bookinfo service. Prerequisites The cluster and ServiceMeshControlPlane have been configured for mesh federation. An account with the cluster-admin role. Note You can configure services for export even if they don't exist yet. When a service that matches the value specified in the ExportedServiceSet is deployed, it will be automatically exported. Procedure from the CLI Follow this procedure to create an ExportedServiceSet from the command line. Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. Enter the following command. Then, enter your username and password when prompted. USD oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443 Change to the project where you installed the Service Mesh control plane; for example, red-mesh-system . USD oc project red-mesh-system Create an ExportedServiceSet file based on the following example where red-mesh is exporting services to green-mesh . Example ExportedServiceSet resource from red-mesh to green-mesh apiVersion: federation.maistra.io/v1 kind: ExportedServiceSet metadata: name: green-mesh namespace: red-mesh-system spec: exportRules: - type: NameSelector nameSelector: namespace: red-mesh-bookinfo name: ratings alias: namespace: bookinfo name: red-ratings - type: NameSelector nameSelector: namespace: red-mesh-bookinfo name: reviews Run the following command to upload and create the ExportedServiceSet resource in the red-mesh-system namespace. USD oc create -n <ControlPlaneNamespace> -f <ExportedServiceSet.yaml> For example: USD oc create -n red-mesh-system -f export-to-green-mesh.yaml Create additional ExportedServiceSets as needed for each mesh peer in your federated mesh. To validate the services you've exported from red-mesh to share with green-mesh , run the following command: USD oc get exportedserviceset <PeerMeshExportedTo> -o yaml For example: USD oc get exportedserviceset green-mesh -o yaml Run the following command to validate the services the red-mesh exports to share with green-mesh: USD oc get exportedserviceset <PeerMeshExportedTo> -o yaml For example: USD oc -n red-mesh-system get exportedserviceset green-mesh -o yaml Example validating the services exported from the red mesh that are shared with the green mesh. status: exportedServices: - exportedName: red-ratings.bookinfo.svc.green-mesh-exports.local localService: hostname: ratings.red-mesh-bookinfo.svc.cluster.local name: ratings namespace: red-mesh-bookinfo - exportedName: reviews.red-mesh-bookinfo.svc.green-mesh-exports.local localService: hostname: reviews.red-mesh-bookinfo.svc.cluster.local name: reviews namespace: red-mesh-bookinfo The status.exportedServices array lists the services that are currently exported (these services matched the export rules in the ExportedServiceSet object ). Each entry in the array indicates the name of the exported service and details about the local service that is exported. If a service that you expected to be exported is missing, confirm the Service object exists, its name or labels match the exportRules defined in the ExportedServiceSet object, and that the Service object's namespace is configured as a member of the service mesh using the ServiceMeshMemberRoll or ServiceMeshMember object. 1.18.12. 
Importing a service into a federated mesh Importing services lets you explicitly specify which services exported from another mesh should be accessible within your service mesh. You use an ImportedServiceSet resource to select services for import. Only services exported by a mesh peer and explicitly imported are available to the mesh. Services that you do not explicitly import are not made available within the mesh. You can select services by namespace or name. You can use wildcards to select services, for example, to import all the services that were exported to the namespace. You can select services for export using a label selector, which may be global to the mesh, or scoped to a specific member namespace. You can import services using an alias. For example, you can import the custom-ns/bar service as other-mesh/bar . You can specify a custom domain suffix, which will be appended to the name.namespace of an imported service for its fully qualified domain name; for example, bar.other-mesh.imported.local . The following example is for the green-mesh importing a service that was exported by red-mesh . Example ImportedServiceSet kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh #name of mesh that exported the service namespace: green-mesh-system #mesh namespace that service is being imported into spec: importRules: # first matching rule is used # import ratings.bookinfo as ratings.bookinfo - type: NameSelector importAsLocal: false nameSelector: namespace: bookinfo name: ratings alias: # service will be imported as ratings.bookinfo.svc.red-mesh-imports.local namespace: bookinfo name: ratings Table 1.11. ImportedServiceSet parameters Parameter Description Values Name of the ServiceMeshPeer that exported the service to the federated mesh. Name of the namespace containing the ServiceMeshPeer resource (the mesh system namespace). Type of rule that will govern the import for the service. The first matching rule found for the service will be used for the import. NameSelector To create a NameSelector rule, specify the namespace and the name of the exported service. Set to true to aggregate remote endpoint with local services. When true , services will be imported as <name>.<namespace>.svc.cluster.local true / false To create a NameSelector rule that uses an alias for the service, after specifying the namespace and name for the service, then specify the alias for the namespace and the alias to be used for name of the service. Import the "bookinfo/ratings" service from the red-mesh into blue-mesh kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh namespace: blue-mesh-system spec: importRules: - type: NameSelector importAsLocal: false nameSelector: namespace: bookinfo name: ratings Import all services from the red-mesh's west-data-center namespace into the green-mesh. These services will be accessible as <name>.west-data-center.svc.red-mesh-imports.local kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh namespace: green-mesh-system spec: importRules: - type: NameSelector importAsLocal: false nameSelector: namespace: west-data-center name: "*" 1.18.12.1. Creating an ImportedServiceSet You create an ImportedServiceSet resource to explicitly declare the services that you want to import into your mesh. 
Services are imported with the name <exported-name>.<exported-namespace>.svc.<ServiceMeshPeer.name>.remote which is a "hidden" service, visible only within the egress gateway namespace and is associated with the exported service's hostname. The service will be available locally as <export-name>.<export-namespace>.<domainSuffix> , where domainSuffix is svc.<ServiceMeshPeer.name>-imports.local by default, unless importAsLocal is set to true , in which case domainSuffix is svc.cluster.local . If importAsLocal is set to false , the domain suffix in the import rule will be applied. You can treat the local import just like any other service in the mesh. It automatically routes through the egress gateway, where it is redirected to the exported service's remote name. Prerequisites The cluster and ServiceMeshControlPlane have been configured for mesh federation. An account with the cluster-admin role. Note You can configure services for import even if they haven't been exported yet. When a service that matches the value specified in the ImportedServiceSet is deployed and exported, it will be automatically imported. Procedure from the CLI Follow this procedure to create an ImportedServiceSet from the command line. Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. Enter the following command. Then, enter your username and password when prompted. USD oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443 Change to the project where you installed the Service Mesh control plane; for example, green-mesh-system . USD oc project green-mesh-system Create an ImportedServiceSet file based on the following example where green-mesh is importing services previously exported by red-mesh . Example ImportedServiceSet resource from red-mesh to green-mesh kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh namespace: green-mesh-system spec: importRules: - type: NameSelector importAsLocal: false nameSelector: namespace: bookinfo name: red-ratings alias: namespace: bookinfo name: ratings Run the following command to upload and create the ImportedServiceSet resource in the green-mesh-system namespace. USD oc create -n <ControlPlaneNamespace> -f <ImportedServiceSet.yaml> For example: USD oc create -n green-mesh-system -f import-from-red-mesh.yaml Create additional ImportedServiceSet resources as needed for each mesh peer in your federated mesh. To validate the services you've imported into green-mesh , run the following command: USD oc get importedserviceset <PeerMeshImportedInto> -o yaml For example: USD oc get importedserviceset green-mesh -o yaml Run the following command to validate the services imported into a mesh. USD oc get importedserviceset <PeerMeshImportedInto> -o yaml Example validating that the services exported from the red mesh have been imported into the green mesh using the status section of the importedserviceset/red-mesh' object in the 'green-mesh-system namespace: USD oc -n green-mesh-system get importedserviceset/red-mesh -o yaml status: importedServices: - exportedName: red-ratings.bookinfo.svc.green-mesh-exports.local localService: hostname: ratings.bookinfo.svc.red-mesh-imports.local name: ratings namespace: bookinfo - exportedName: reviews.red-mesh-bookinfo.svc.green-mesh-exports.local localService: hostname: "" name: "" namespace: "" In the preceding example only the ratings service is imported, as indicated by the populated fields under localService . 
The reviews service is available for import, but isn't currently imported because it does not match any importRules in the ImportedServiceSet object. 1.18.13. Configuring a federated mesh for failover Failover is the ability to switch automatically and seamlessly to a reliable backup system, for example another server. In the case of a federated mesh, you can configure a service in one mesh to fail over to a service in another mesh. You configure federation for failover by setting the importAsLocal and locality settings in an ImportedServiceSet resource and then configuring a DestinationRule that configures failover for the service to the locality specified in the ImportedServiceSet . Prerequisites Two or more OpenShift Container Platform 4.6 or above clusters already networked and federated. ExportedServiceSet resources already created for each mesh peer in the federated mesh. ImportedServiceSet resources already created for each mesh peer in the federated mesh. An account with the cluster-admin role. 1.18.13.1. Configuring an ImportedServiceSet for failover Locality-weighted load balancing allows administrators to control the distribution of traffic to endpoints based on the localities of where the traffic originates and where it will terminate. These localities are specified using arbitrary labels that designate a hierarchy of localities in {region}/{zone}/{sub-zone} form. In the examples in this section, the green-mesh is located in the us-east region, and the red-mesh is located in the us-west region. Example ImportedServiceSet resource from red-mesh to green-mesh kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh #name of mesh that exported the service namespace: green-mesh-system #mesh namespace that service is being imported into spec: importRules: # first matching rule is used # import ratings.bookinfo as ratings.bookinfo - type: NameSelector importAsLocal: true nameSelector: namespace: bookinfo name: ratings alias: # service will be imported as ratings.bookinfo.svc.red-mesh-imports.local namespace: bookinfo name: ratings #Locality within which imported services should be associated. locality: region: us-west Table 1.12. ImportedServiceLocality fields table Name Description Type region: Region within which imported services are located. string subzone: Subzone within which imported services are located. If Subzone is specified, Zone must also be specified. string zone: Zone within which imported services are located. If Zone is specified, Region must also be specified. string Procedure Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role, and enter the following command: USD oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443 Change to the project where you installed the Service Mesh control plane, and enter the following command: USD oc project <smcp-system> For example, green-mesh-system . USD oc project green-mesh-system Edit the ImportedServiceSet file, where <ImportedServiceSet.yaml> includes a full path to the file you want to edit, by entering the following command: USD oc edit -n <smcp-system> -f <ImportedServiceSet.yaml> For example, to modify the file that imports from the red-mesh-system to the green-mesh-system , as shown in the ImportedServiceSet example: USD oc edit -n green-mesh-system -f import-from-red-mesh.yaml Modify the file: Set spec.importRules.importAsLocal to true . Set spec.locality to a region , zone , or subzone . Save your changes. 1.18.13.2.
Configuring a DestinationRule for failover Create a DestinationRule resource that configures the following: Outlier detection for the service. This is required in order for failover to function properly. In particular, it configures the sidecar proxies to know when endpoints for a service are unhealthy, eventually triggering a failover to the locality. Failover policy between regions. This ensures that failover beyond a region boundary will behave predictably. Procedure Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. Enter the following command. Then, enter your username and password when prompted. USD oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443 Change to the project where you installed the Service Mesh control plane. USD oc project <smcp-system> For example, green-mesh-system . USD oc project green-mesh-system Create a DestinationRule file based on the following example where if green-mesh is unavailable, the traffic should be routed from the green-mesh in the us-east region to the red-mesh in us-west . Example DestinationRule apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: default-failover namespace: bookinfo spec: host: "ratings.bookinfo.svc.cluster.local" trafficPolicy: loadBalancer: localityLbSetting: enabled: true failover: - from: us-east to: us-west outlierDetection: consecutive5xxErrors: 3 interval: 10s baseEjectionTime: 1m Deploy the DestinationRule , where <DestinationRule> includes the full path to your file, enter the following command: USD oc create -n <application namespace> -f <DestinationRule.yaml> For example: USD oc create -n bookinfo -f green-mesh-us-west-DestinationRule.yaml 1.18.14. Removing a service from the federated mesh If you need to remove a service from the federated mesh, for example if it has become obsolete or has been replaced by a different service, you can do so. 1.18.14.1. To remove a service from a single mesh Remove the entry for the service from the ImportedServiceSet resource for the mesh peer that no longer should access the service. 1.18.14.2. To remove a service from the entire federated mesh Remove the entry for the service from the ExportedServiceSet resource for the mesh that owns the service. 1.18.15. Removing a mesh from the federated mesh If you need to remove a mesh from the federation, you can do so. Edit the removed mesh's ServiceMeshControlPlane resource to remove all federation ingress gateways for peer meshes. For each mesh peer that the removed mesh has been federated with: Remove the ServiceMeshPeer resource that links the two meshes. Edit the peer mesh's ServiceMeshControlPlane resource to remove the egress gateway that serves the removed mesh. 1.19. Extensions You can use WebAssembly extensions to add new features directly into the Red Hat OpenShift Service Mesh proxies. This lets you move even more common functionality out of your applications, and implement them in a single language that compiles to WebAssembly bytecode. Note WebAssembly extensions are not supported on IBM Z and IBM Power Systems. 1.19.1. WebAssembly modules overview WebAssembly modules can be run on many platforms, including proxies, and have broad language support, fast execution, and a sandboxed-by-default security model. Red Hat OpenShift Service Mesh extensions are Envoy HTTP Filters , giving them a wide range of capabilities: Manipulating the body and headers of requests and responses. 
Out-of-band HTTP requests to services not in the request path, such as authentication or policy checking. Side-channel data storage and queues for filters to communicate with each other. Note When creating new WebAssembly extensions, use the WasmPlugin API. The ServiceMeshExtension API was deprecated in Red Hat OpenShift Service Mesh version 2.2 and was removed in Red Hat OpenShift Service Mesh version 2.3. There are two parts to writing a Red Hat OpenShift Service Mesh extension: You must write your extension using an SDK that exposes the proxy-wasm API and compile it to a WebAssembly module. You must then package the module into a container. Supported languages You can use any language that compiles to WebAssembly bytecode to write a Red Hat OpenShift Service Mesh extension, but the following languages have existing SDKs that expose the proxy-wasm API so that it can be consumed directly. Table 1.13. Supported languages Language Maintainer Repository AssemblyScript solo.io solo-io/proxy-runtime C++ proxy-wasm team (Istio Community) proxy-wasm/proxy-wasm-cpp-sdk Go tetrate.io tetratelabs/proxy-wasm-go-sdk Rust proxy-wasm team (Istio Community) proxy-wasm/proxy-wasm-rust-sdk 1.19.2. WasmPlugin container format Istio supports Open Container Initiative (OCI) images in its Wasm Plugin mechanism. You can distribute your Wasm Plugins as a container image, and you can use the spec.url field to refer to a container registry location. For example, quay.io/my-username/my-plugin:latest . Because each execution environment (runtime) for a WASM module can have runtime-specific configuration parameters, a WASM image can be composed of two layers: plugin.wasm (Required) - Content layer. This layer consists of a .wasm binary containing the bytecode of your WebAssembly module, to be loaded by the runtime. You must name this file plugin.wasm . runtime-config.json (Optional) - Configuration layer. This layer consists of a JSON-formatted string that describes metadata about the module for the target runtime. The config layer might also contain additional data, depending on the target runtime. For example, the config for a WASM Envoy Filter contains root_ids available on the filter. 1.19.3. WasmPlugin API reference The WasmPlugins API provides a mechanism to extend the functionality provided by the Istio proxy through WebAssembly filters. You can deploy multiple WasmPlugins. The phase and priority settings determine the order of execution (as part of Envoy's filter chain), allowing the configuration of complex interactions between user-supplied WasmPlugins and Istio's internal filters. In the following example, an authentication filter implements an OpenID flow and populates the Authorization header with a JSON Web Token (JWT). Istio authentication consumes this token and deploys it to the ingress gateway. The WasmPlugin file lives in the proxy sidecar filesystem. Note the field url . apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: openid-connect namespace: istio-ingress spec: selector: matchLabels: istio: ingressgateway url: file:///opt/filters/openid.wasm sha256: 1ef0c9a92b0420cf25f7fe5d481b231464bc88f486ca3b9c83ed5cc21d2f6210 phase: AUTHN pluginConfig: openid_server: authn openid_realm: ingress Below is the same example, but this time an Open Container Initiative (OCI) image is used instead of a file in the filesystem. Note the fields url , imagePullPolicy , and imagePullSecret . 
apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: openid-connect namespace: istio-system spec: selector: matchLabels: istio: ingressgateway url: oci://private-registry:5000/openid-connect/openid:latest imagePullPolicy: IfNotPresent imagePullSecret: private-registry-pull-secret phase: AUTHN pluginConfig: openid_server: authn openid_realm: ingress Table 1.14. WasmPlugin Field Reference Field Type Description Required spec.selector WorkloadSelector Criteria used to select the specific set of pods/VMs on which this plugin configuration should be applied. If omitted, this configuration will be applied to all workload instances in the same namespace. If the WasmPlugin field is present in the config root namespace, it will be applied to all applicable workloads in any namespace. No spec.url string URL of a Wasm module or OCI container. If no scheme is present, defaults to oci:// , referencing an OCI image. Other valid schemes are file:// for referencing .wasm module files present locally within the proxy container, and http[s]:// for .wasm module files hosted remotely. No spec.sha256 string SHA256 checksum that will be used to verify the Wasm module or OCI container. If the url field already references a SHA256 (using the @sha256: notation), it must match the value of this field. If an OCI image is referenced by tag and this field is set, its checksum will be verified against the contents of this field after pulling. No spec.imagePullPolicy PullPolicy The pull behavior to be applied when fetching an OCI image. Only relevant when images are referenced by tag instead of SHA. Defaults to the value IfNotPresent , except when an OCI image is referenced in the url field and the latest tag is used, in which case the value Always is the default, mirroring K8s behavior. Setting is ignored if the url field is referencing a Wasm module directly using file:// or http[s]:// . No spec.imagePullSecret string Credentials to use for OCI image pulling. The name of a secret in the same namespace as the WasmPlugin object that contains a pull secret for authenticating against the registry when pulling the image. No spec.phase PluginPhase Determines where in the filter chain this WasmPlugin object is injected. No spec.priority int64 Determines the ordering of WasmPlugins objects that have the same phase value. When multiple WasmPlugins objects are applied to the same workload in the same phase, they will be applied by priority and in descending order. If the priority field is not set, or two WasmPlugins objects with the same value, the ordering will be determined from the name and namespace of the WasmPlugins objects. Defaults to the value 0 . No spec.pluginName string The plugin name used in the Envoy configuration. Some Wasm modules might require this value to select the Wasm plugin to execute. No spec.pluginConfig Struct The configuration that will be passed on to the plugin. No spec.pluginConfig.verificationKey string The public key used to verify signatures of signed OCI images or Wasm modules. Must be supplied in PEM format. No The WorkloadSelector object specifies the criteria used to determine if a filter can be applied to a proxy. The matching criteria includes the metadata associated with a proxy, workload instance information such as labels attached to the pod/VM, or any other information that the proxy provides to Istio during the initial handshake. If multiple conditions are specified, all conditions need to match in order for the workload instance to be selected. 
Currently, only label based selection mechanism is supported. Table 1.15. WorkloadSelector Field Type Description Required matchLabels map<string, string> One or more labels that indicate a specific set of pods/VMs on which a policy should be applied. The scope of label search is restricted to the configuration namespace in which the resource is present. Yes The PullPolicy object specifies the pull behavior to be applied when fetching an OCI image. Table 1.16. PullPolicy Value Description <empty> Defaults to the value IfNotPresent , except for OCI images with tag latest, for which the default will be the value Always . IfNotPresent If an existing version of the image has been pulled before, that will be used. If no version of the image is present locally, we will pull the latest version. Always Always pull the latest version of an image when applying this plugin. Struct represents a structured data value, consisting of fields which map to dynamically typed values. In some languages, Struct might be supported by a native representation. For example, in scripting languages like JavaScript a struct is represented as an object. Table 1.17. Struct Field Type Description fields map<string, Value> Map of dynamically typed values. PluginPhase specifies the phase in the filter chain where the plugin will be injected. Table 1.18. PluginPhase Field Description <empty> Control plane decides where to insert the plugin. This will generally be at the end of the filter chain, right before the Router. Do not specify PluginPhase if the plugin is independent of others. AUTHN Insert plugin before Istio authentication filters. AUTHZ Insert plugin before Istio authorization filters and after Istio authentication filters. STATS Insert plugin before Istio stats filters and after Istio authorization filters. 1.19.3.1. Deploying WasmPlugin resources You can enable Red Hat OpenShift Service Mesh extensions using the WasmPlugin resource. In this example, istio-system is the name of the Service Mesh control plane project. The following example creates an openid-connect filter that performs an OpenID Connect flow to authenticate the user. Procedure Create the following example resource: Example plugin.yaml apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: openid-connect namespace: istio-system spec: selector: matchLabels: istio: ingressgateway url: oci://private-registry:5000/openid-connect/openid:latest imagePullPolicy: IfNotPresent imagePullSecret: private-registry-pull-secret phase: AUTHN pluginConfig: openid_server: authn openid_realm: ingress Apply your plugin.yaml file with the following command: USD oc apply -f plugin.yaml 1.19.4. ServiceMeshExtension container format You must have a .wasm file containing the bytecode of your WebAssembly module, and a manifest.yaml file in the root of the container filesystem to make your container image a valid extension image. Note When creating new WebAssembly extensions, use the WasmPlugin API. The ServiceMeshExtension API was deprecated in Red Hat OpenShift Service Mesh version 2.2 and was removed in Red Hat OpenShift Service Mesh version 2.3. manifest.yaml schemaVersion: 1 name: <your-extension> description: <description> version: 1.0.0 phase: PreAuthZ priority: 100 module: extension.wasm Table 1.19. Field Reference for manifest.yml Field Description Required schemaVersion Used for versioning of the manifest schema. Currently the only possible value is 1 . This is a required field. name The name of your extension. 
This field is just metadata and currently unused. description The description of your extension. This field is just metadata and currently unused. version The version of your extension. This field is just metadata and currently unused. phase The default execution phase of your extension. This is a required field. priority The default priority of your extension. This is a required field. module The relative path from the container filesystem's root to your WebAssembly module. This is a required field. 1.19.5. ServiceMeshExtension reference The ServiceMeshExtension API provides a mechanism to extend the functionality provided by the Istio proxy through WebAssembly filters. There are two parts to writing a WebAssembly extension: Write your extension using an SDK that exposes the proxy-wasm API and compile it to a WebAssembly module. Package it into a container. Note When creating new WebAssembly extensions, use the WasmPlugin API. The ServiceMeshExtension API, which was deprecated in Red Hat OpenShift Service Mesh version 2.2, was removed in Red Hat OpenShift Service Mesh version 2.3. Table 1.20. ServiceMeshExtension Field Reference Field Description metadata.namespace The metadata.namespace field of a ServiceMeshExtension source has a special semantic: if it equals the Control Plane Namespace, the extension will be applied to all workloads in the Service Mesh that match its workloadSelector value. When deployed to any other Mesh Namespace, it will only be applied to workloads in that same Namespace. spec.workloadSelector The spec.workloadSelector field has the same semantic as the spec.selector field of the Istio Gateway resource . It will match a workload based on its Pod labels. If no workloadSelector value is specified, the extension will be applied to all workloads in the namespace. spec.config This is a structured field that will be handed over to the extension, with the semantics dependent on the extension you are deploying. spec.image A container image URI pointing to the image that holds the extension. spec.phase The phase determines where in the filter chain the extension is injected, in relation to existing Istio functionality like Authentication, Authorization and metrics generation. Valid values are: PreAuthN, PostAuthN, PreAuthZ, PostAuthZ, PreStats, PostStats. This field defaults to the value set in the manifest.yaml file of the extension, but can be overwritten by the user. spec.priority If multiple extensions with the same spec.phase value are applied to the same workload instance, the spec.priority value determines the ordering of execution. Extensions with higher priority will be executed first. This allows for inter-dependent extensions. This field defaults to the value set in the manifest.yaml file of the extension, but can be overwritten by the user. 1.19.5.1. Deploying ServiceMeshExtension resources You can enable Red Hat OpenShift Service Mesh extensions using the ServiceMeshExtension resource. In this example, istio-system is the name of the Service Mesh control plane project. Note When creating new WebAssembly extensions, use the WasmPlugin API. The ServiceMeshExtension API was deprecated in Red Hat OpenShift Service Mesh version 2.2 and removed in Red Hat OpenShift Service Mesh version 2.3. For a complete example that was built using the Rust SDK, take a look at the header-append-filter . It is a simple filter that appends one or more headers to the HTTP responses, with their names and values taken out from the config field of the extension. 
See a sample configuration in the snippet below.

Procedure

Create the following example resource:

Example ServiceMeshExtension resource extension.yaml

apiVersion: maistra.io/v1
kind: ServiceMeshExtension
metadata:
  name: header-append
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      app: httpbin
  config:
    first-header: some-value
    another-header: another-value
  image: quay.io/maistra-dev/header-append-filter:2.1
  phase: PostAuthZ
  priority: 100

Apply your extension.yaml file with the following command:

USD oc apply -f <extension>.yaml

1.19.6. Migrating from ServiceMeshExtension to WasmPlugin resources

The ServiceMeshExtension API, which was deprecated in Red Hat OpenShift Service Mesh version 2.2, was removed in Red Hat OpenShift Service Mesh version 2.3. If you are using the ServiceMeshExtension API, you must migrate to the WasmPlugin API to continue using your WebAssembly extensions. The APIs are very similar. The migration consists of two steps: Renaming your plugin file and updating the module packaging. Creating a WasmPlugin resource that references the updated container image.

1.19.6.1. API changes

The new WasmPlugin API is similar to the ServiceMeshExtension API, but with a few differences, especially in the field names: Table 1.21. Field changes between ServiceMeshExtensions and WasmPlugin ServiceMeshExtension WasmPlugin spec.config spec.pluginConfig spec.workloadSelector spec.selector spec.image spec.url spec.phase valid values: PreAuthN, PostAuthN, PreAuthZ, PostAuthZ, PreStats, PostStats spec.phase valid values: <empty>, AUTHN, AUTHZ, STATS

The following is an example of how a ServiceMeshExtension resource could be converted into a WasmPlugin resource.

ServiceMeshExtension resource

apiVersion: maistra.io/v1
kind: ServiceMeshExtension
metadata:
  name: header-append
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      app: httpbin
  config:
    first-header: some-value
    another-header: another-value
  image: quay.io/maistra-dev/header-append-filter:2.2
  phase: PostAuthZ
  priority: 100

New WasmPlugin resource equivalent to the ServiceMeshExtension above

apiVersion: extensions.istio.io/v1alpha1
kind: WasmPlugin
metadata:
  name: header-append
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: httpbin
  url: oci://quay.io/maistra-dev/header-append-filter:2.2
  phase: STATS
  pluginConfig:
    first-header: some-value
    another-header: another-value

1.19.6.2. Container image format changes

The new WasmPlugin container image format is similar to the ServiceMeshExtension format, with the following differences: The ServiceMeshExtension container format required a metadata file named manifest.yaml in the root directory of the container filesystem. The WasmPlugin container format does not require a manifest.yaml file. The .wasm file (the actual plugin) that previously could have any filename now must be named plugin.wasm and must be located in the root directory of the container filesystem.

1.19.6.3. Migrating to WasmPlugin resources

To upgrade your WebAssembly extensions from the ServiceMeshExtension API to the WasmPlugin API, you rename your plugin file. Prerequisites ServiceMeshControlPlane is upgraded to version 2.2 or later. Procedure Update your container image. If the plugin is already in /plugin.wasm inside the container, skip to the next step. If not: Ensure the plugin file is named plugin.wasm . Ensure the plugin file is located in the root (/) directory of the container filesystem.
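For example, a minimal Containerfile sketch that satisfies both requirements could look like the following. Here plugin.wasm is assumed to be your renamed build artifact in the build context; this is an illustration, not a prescribed layout:

Example Containerfile (illustrative sketch)

# Start from an empty image; the extension needs no runtime of its own.
FROM scratch
# Place the WebAssembly module at the root of the container filesystem.
COPY plugin.wasm ./plugin.wasm

With a layout like this, rebuilding and pushing the image in the next step produces an image in the format that the WasmPlugin container format expects.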
Rebuild your container image and push it to a container registry. Remove the ServiceMeshExtension resource and create a WasmPlugin resource that refers to the new container image you built.

1.20. Using the 3scale WebAssembly module

Note The threescale-wasm-auth module runs on integrations of 3scale API Management 2.11 or later with Red Hat OpenShift Service Mesh 2.1.0 or later. The threescale-wasm-auth module is a WebAssembly module that uses a set of interfaces known as an application binary interface ( ABI ). The ABI is defined by the Proxy-WASM specification and drives any piece of software that implements it; in this case, the module uses it to authorize HTTP requests against 3scale. As an ABI specification, Proxy-WASM defines the interaction between a piece of software named host and another named module , program , or extension . The host exposes a set of services used by the module to perform a task, in this case, to process proxy requests. The host environment is composed of a WebAssembly virtual machine interacting with a piece of software, in this case, an HTTP proxy. The module itself runs in isolation from the outside world except for the instructions it runs on the virtual machine and the ABI specified by Proxy-WASM. This is a safe way to provide extension points to software: the extension can only interact in well-defined ways with the virtual machine and the host. The interaction provides a computing model and a connection to the outside world the proxy is meant to have.

1.20.1. Compatibility

The threescale-wasm-auth module is designed to be fully compatible with all implementations of the Proxy-WASM ABI specification. At this point, however, it has only been thoroughly tested to work with the Envoy reverse proxy.

1.20.2. Usage as a stand-alone module

Because of its self-contained design, it is possible to configure this module to work with Proxy-WASM proxies independently of Service Mesh, as well as 3scale Istio adapter deployments.

1.20.3. Prerequisites

The module works with all supported 3scale releases, except when configuring a service to use OpenID Connect (OIDC) , which requires 3scale 2.11 or later.

1.20.4. Configuring the threescale-wasm-auth module

Cluster administrators on OpenShift Container Platform can configure the threescale-wasm-auth module to authorize HTTP requests to 3scale API Management through an application binary interface (ABI). The ABI defines the interaction between the host and the module, exposes the host's services, and allows you to use the module to process proxy requests.

1.20.4.1. The WasmPlugin API extension

Service Mesh provides a custom resource definition to specify and apply Proxy-WASM extensions to sidecar proxies, known as WasmPlugin . Service Mesh applies this custom resource to the set of workloads that require HTTP API management with 3scale. See custom resource definition for more information. Note Configuring the WebAssembly extension is currently a manual process. Support for fetching the configuration for services from the 3scale system will be available in a future release. Prerequisites Identify a Kubernetes workload and namespace on your Service Mesh deployment to which you will apply this module. You must have a 3scale tenant account. See SaaS or 3scale 2.11 On-Premises with a matching service and relevant applications and metrics defined. If you apply the module to the <product_page> microservice in the bookinfo namespace, see the Bookinfo sample application .
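For example, to confirm the target workload and the labels that the module's selector will reference (assuming the bookinfo namespace and product page workload used in the example that follows), you might run:

USD oc get pods -n bookinfo --show-labels

The LABELS column shows the key/value pairs, such as app=productpage, that you can match in the selector of the custom resource.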
The following example is the YAML format for the custom resource for threescale-wasm-auth module. This example refers to the upstream Maistra version of Service Mesh, WasmPlugin API. You must declare the namespace where the threescale-wasm-auth module is deployed, alongside a selector to identify the set of applications the module will apply to: apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> namespace: <bookinfo> 1 spec: selector: 2 labels: app: <product_page> pluginConfig: <yaml_configuration> url: oci://registry.redhat.io/3scale-amp2/3scale-auth-wasm-rhel8:0.0.3 phase: AUTHZ priority: 100 1 The namespace . 2 The selector . The spec.pluginConfig field depends on the module configuration and it is not populated in the example. Instead, the example uses the <yaml_configuration> placeholder value. You can use the format of this custom resource example. The spec.pluginConfig field varies depending on the application. All other fields persist across multiple instances of this custom resource. As examples: url : Only changes when newer versions of the module are deployed. phase : Remains the same, since this module needs to be invoked after the proxy has done any local authorization, such as validating OpenID Connect (OIDC) tokens. After you have the module configuration in spec.pluginConfig and the rest of the custom resource, apply it with the oc apply command: USD oc apply -f threescale-wasm-auth-bookinfo.yaml Additional resources Migrating from ServiceMeshExtension to WasmPlugin resources Custom Resources 1.20.5. Applying 3scale external ServiceEntry objects To have the threescale-wasm-auth module authorize requests against 3scale, the module must have access to 3scale services. You can do this within Red Hat OpenShift Service Mesh by applying an external ServiceEntry object and a corresponding DestinationRule object for TLS configuration to use the HTTPS protocol. The custom resources (CRs) set up the service entries and destination rules for secure access from within Service Mesh to 3scale Hosted (SaaS) for the backend and system components of the Service Management API and the Account Management API. The Service Management API receives queries for the authorization status of each request. The Account Management API provides API management configuration settings for your services. 
Procedure Apply the following external ServiceEntry CR and related DestinationRule CR for 3scale Hosted backend to your cluster: Add the ServiceEntry CR to a file called service-entry-threescale-saas-backend.yml : ServiceEntry CR apiVersion: networking.istio.io/v1beta1 kind: ServiceEntry metadata: name: service-entry-threescale-saas-backend spec: hosts: - su1.3scale.net ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS Add the DestinationRule CR to a file called destination-rule-threescale-saas-backend.yml : DestinationRule CR apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: destination-rule-threescale-saas-backend spec: host: su1.3scale.net trafficPolicy: tls: mode: SIMPLE sni: su1.3scale.net Apply and save the external ServiceEntry CR for the 3scale Hosted backend to your cluster, by running the following command: USD oc apply -f service-entry-threescale-saas-backend.yml Apply and save the external DestinationRule CR for the 3scale Hosted backend to your cluster, by running the following command: USD oc apply -f destination-rule-threescale-saas-backend.yml Apply the following external ServiceEntry CR and related DestinationRule CR for 3scale Hosted system to your cluster: Add the ServiceEntry CR to a file called service-entry-threescale-saas-system.yml : ServiceEntry CR apiVersion: networking.istio.io/v1beta1 kind: ServiceEntry metadata: name: service-entry-threescale-saas-system spec: hosts: - multitenant.3scale.net ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS Add the DestinationRule CR to a file called destination-rule-threescale-saas-system.yml : DestinationRule CR apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: destination-rule-threescale-saas-system spec: host: multitenant.3scale.net trafficPolicy: tls: mode: SIMPLE sni: multitenant.3scale.net Apply and save the external ServiceEntry CR for the 3scale Hosted system to your cluster, by running the following command: USD oc apply -f service-entry-threescale-saas-system.yml Apply and save the external DestinationRule CR for the 3scale Hosted system to your cluster, by running the following command: USD oc apply -f <destination-rule-threescale-saas-system.yml> Alternatively, you can deploy an in-mesh 3scale service. To deploy an in-mesh 3scale service, change the location of the services in the CR by deploying 3scale and linking to the deployment. Additional resources Service entry and destination rule documentation 1.20.6. The 3scale WebAssembly module configuration The WasmPlugin custom resource spec provides the configuration that the Proxy-WASM module reads from. The spec is embedded in the host and read by the Proxy-WASM module. Typically, the configurations are in the JSON file format for the modules to parse, however the WasmPlugin resource can interpret the spec value as YAML and convert it to JSON for consumption by the module. If you use the Proxy-WASM module in stand-alone mode, you must write the configuration using the JSON format. Using the JSON format means using escaping and quoting where needed within the host configuration files, for example Envoy . When you use the WebAssembly module with the WasmPlugin resource, the configuration is in the YAML format. In this case, an invalid configuration forces the module to show diagnostics based on its JSON representation to a sidecar's logging stream. 
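To illustrate the difference, the following sketch shows the same minimal pluginConfig fragment, first as the YAML you would place in a WasmPlugin resource and then as the equivalent JSON for a stand-alone proxy configuration. The values are placeholders rather than a working configuration:

YAML form (inside a WasmPlugin resource)

pluginConfig:
  api: v1
  system:
    name: saas_porta
    token: my_account_token

JSON form (stand-alone mode)

{ "api": "v1", "system": { "name": "saas_porta", "token": "my_account_token" } }

When the JSON form is embedded as a string inside a host configuration file, such as an Envoy bootstrap, the inner quotes must additionally be escaped, which is where the extra escaping and quoting described above comes from.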
Important The EnvoyFilter custom resource is not a supported API, although it can be used in some 3scale Istio adapter or Service Mesh releases. Using the EnvoyFilter custom resource is not recommended. Use the WasmPlugin API instead of the EnvoyFilter custom resource. If you must use the EnvoyFilter custom resource, you must specify the spec in JSON format. 1.20.6.1. Configuring the 3scale WebAssembly module The architecture of the 3scale WebAssembly module configuration depends on the 3scale account and authorization service, and the list of services to handle. Prerequisites The prerequisites are a set of minimum mandatory fields in all cases: For the 3scale account and authorization service: the backend-listener URL. For the list of services to handle: the service IDs and at least one credential look up method and where to find it. You will find examples for dealing with userkey , appid with appkey , and OpenID Connect (OIDC) patterns. The WebAssembly module uses the settings you specified in the static configuration. For example, if you add a mapping rule configuration to the module, it will always apply, even when the 3scale Admin Portal has no such mapping rule. The rest of the WasmPlugin resource exists around the spec.pluginConfig YAML entry. 1.20.6.2. The 3scale WebAssembly module api object The api top-level string from the 3scale WebAssembly module defines which version of the configuration the module will use. Note A non-existent or unsupported version of the api object renders the 3scale WebAssembly module inoperable. The api top-level string example apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> namespace: <bookinfo> spec: pluginConfig: api: v1 ... The api entry defines the rest of the values for the configuration. The only accepted value is v1 . New settings that break compatibility with the current configuration or need more logic that modules using v1 cannot handle, will require different values. 1.20.6.3. The 3scale WebAssembly module system object The system top-level object specifies how to access the 3scale Account Management API for a specific account. The upstream field is the most important part of the object. The system object is optional, but recommended unless you are providing a fully static configuration for the 3scale WebAssembly module, which is an option if you do not want to provide connectivity to the system component of 3scale. When you provide static configuration objects in addition to the system object, the static ones always take precedence. apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: system: name: <saas_porta> upstream: <object> token: <my_account_token> ttl: 300 ... Table 1.22. system object fields Name Description Required name An identifier for the 3scale service, currently not referenced elsewhere. Optional upstream The details about a network host to be contacted. upstream refers to the 3scale Account Management API host known as system. Yes token A 3scale personal access token with read permissions. Yes ttl The minimum amount of seconds to consider a configuration retrieved from this host as valid before trying to fetch new changes. The default is 600 seconds (10 minutes). Note: there is no maximum amount, but the module will generally fetch any configuration within a reasonable amount of time after this TTL elapses. Optional 1.20.6.4. 
The 3scale WebAssembly module upstream object

The upstream object describes an external host to which the proxy can perform calls.

apiVersion: maistra.io/v1
upstream:
  name: outbound|443||multitenant.3scale.net
  url: "https://myaccount-admin.3scale.net/"
  timeout: 5000
...

Table 1.23. upstream object fields Name Description Required name name is not a free-form identifier. It is the identifier for the external host as defined by the proxy configuration. In the case of stand-alone Envoy configurations, it maps to the name of a Cluster , also known as upstream in other proxies. Note: the value of this field matters, because the Service Mesh and 3scale Istio adapter control plane configure the name according to a format that uses a vertical bar (|) as the separator of multiple fields. For the purposes of this integration, always use the format: outbound|<port>||<hostname> . Yes url The complete URL to access the described service. Unless implied by the scheme, you must include the TCP port. Yes timeout Timeout in milliseconds, so that connections to this service that take more than this amount of time to respond are considered errors. The default is 1000 milliseconds. Optional

1.20.6.5. The 3scale WebAssembly module backend object

The backend top-level object specifies how to access the 3scale Service Management API for authorizing and reporting HTTP requests. This service is provided by the Backend component of 3scale.

apiVersion: extensions.istio.io/v1alpha1
kind: WasmPlugin
metadata:
  name: <threescale_wasm_plugin_name>
spec:
  pluginConfig:
    ...
    backend:
      name: backend
      upstream: <object>
    ...

Table 1.24. backend object fields Name Description Required name An identifier for the 3scale backend, currently not referenced elsewhere. Optional upstream The details about a network host to be contacted. This must refer to the 3scale Service Management API host, known as backend. Yes. This is the most important and required field.

1.20.6.6. The 3scale WebAssembly module services object

The services top-level object specifies which service identifiers are handled by this particular instance of the module . Since accounts have multiple services, you must specify which ones are handled. The rest of the configuration revolves around how to configure services. The services field is required. It is an array that must contain at least one service to be useful.

apiVersion: extensions.istio.io/v1alpha1
kind: WasmPlugin
metadata:
  name: <threescale_wasm_plugin_name>
spec:
  pluginConfig:
    ...
    services:
    - id: "2555417834789"
      token: service_token
      authorities:
      - "*.app"
      - 0.0.0.0
      - "0.0.0.0:8443"
      credentials: <object>
      mapping_rules: <object>
    ...

Each element in the services array represents a 3scale service. Table 1.25. services object fields Name Description Required ID An identifier for this 3scale service, currently not referenced elsewhere. Yes token This token can be found in the proxy configuration for your service in System, or you can retrieve it from System with the following curl command: curl "https://<system_host>/admin/api/services/<service_id>/proxy/configs/production/latest.json?access_token=<access_token>" | jq '.proxy_config.content.backend_authentication_value' Optional authorities An array of strings, each one representing the Authority of a URL to match. These strings accept glob patterns supporting the asterisk ( * ), plus sign ( + ), and question mark ( ? ) matchers. Yes credentials An object defining which kind of credentials to look for and where.
Yes mapping_rules An array of objects representing mapping rules and 3scale methods to hit. Optional

1.20.6.7. The 3scale WebAssembly module credentials object

The credentials object is a component of the service object. credentials specifies which kind of credentials to look up and the steps to perform this action. All fields are optional, but you must specify at least one of user_key or app_id . The order in which you specify each credential is irrelevant because it is pre-established by the module. Only specify one instance of each credential.

apiVersion: extensions.istio.io/v1alpha1
kind: WasmPlugin
metadata:
  name: <threescale_wasm_plugin_name>
spec:
  pluginConfig:
    ...
    services:
    - credentials:
        user_key: <array_of_lookup_queries>
        app_id: <array_of_lookup_queries>
        app_key: <array_of_lookup_queries>
    ...

Table 1.26. credentials object fields Name Description Required user_key This is an array of lookup queries that defines a 3scale user key. A user key is commonly known as an API key. Optional app_id This is an array of lookup queries that define a 3scale application identifier. Application identifiers are provided by 3scale or by using an identity provider like Red Hat Single Sign-On (RH-SSO) or OpenID Connect (OIDC). Whenever the resolution of the lookup queries specified here is successful and resolves to two values, it sets up both the app_id and the app_key . Optional app_key This is an array of lookup queries that define a 3scale application key. Application keys without a resolved app_id are useless, so only specify this field when app_id has been specified. Optional

1.20.6.8. The 3scale WebAssembly module lookup queries

The lookup query object is part of any of the fields in the credentials object. It specifies how a given credential field should be found and processed. When evaluated, a successful resolution means that one or more values were found. A failed resolution means that no values were found. Arrays of lookup queries describe a short-circuit OR relationship: a successful resolution of one of the queries stops the evaluation of any remaining queries and assigns the value or values to the specified credential type. Each query in the array is independent of the others. A lookup query is made up of a single field, a source object, which can be one of a number of source types. See the following example:

apiVersion: extensions.istio.io/v1alpha1
kind: WasmPlugin
metadata:
  name: <threescale_wasm_plugin_name>
spec:
  pluginConfig:
    ...
    services:
    - credentials:
        user_key:
        - <source_type>: <object>
        - <source_type>: <object>
        ...
        app_id:
        - <source_type>: <object>
        ...
        app_key:
        - <source_type>: <object>
        ...
    ...

1.20.6.9. The 3scale WebAssembly module source object

A source object exists as part of an array of sources within any of the credentials object fields. The object field name, referred to as a source type, is any one of the following: header : The lookup query receives HTTP request headers as input. query_string : The lookup query receives the URL query string parameters as input. filter : The lookup query receives filter metadata as input. All source type objects have at least the following two fields: Table 1.27. source-type object fields Name Description Required keys An array of strings, each one a key , referring to entries found in the input data. Yes ops An array of operations that perform a key entry match. The array is a pipeline where operations receive inputs and generate outputs for the next operation.
An operation failing to provide an output resolves the lookup query as failed. The pipeline order of the operations determines the evaluation order. Optional

The filter field name has a required path entry to show the path in the metadata you use to look up data. When a key matches the input data, the rest of the keys are not evaluated and the source resolution algorithm jumps to executing the operations ( ops ) specified, if any. If no ops are specified, the result value of the matching key , if any, is returned. Operations provide a way to specify certain conditions and transformations for inputs you have after the first phase looks up a key . Use operations when you need to transform, decode, and assert properties; however, they do not provide a mature language to deal with all needs, and they lack Turing-completeness . A stack stores the outputs of operations . When evaluated, the lookup query finishes by assigning the value or values at the bottom of the stack, depending on how many values the credential consumes.

1.20.6.10. The 3scale WebAssembly module operations object

Each element in the ops array belonging to a specific source type is an operation object that either applies transformations to values or performs tests. The field name to use for such an object is the name of the operation itself, and any values are the parameters to the operation , which could be structure objects, for example, maps with fields and values, lists, or strings. Most operations take one or more inputs and produce one or more outputs. When they consume inputs or produce outputs, they work with a stack of values: the stack is initially populated with any source matches, each value an operation consumes is popped from the stack, and each value it outputs is pushed onto the stack. Other operations do not consume or produce outputs; they only inspect the stack of values to assert certain properties. Note When resolution finishes, the values picked up by this step, such as the values assigned to app_id , app_key , or user_key , are taken from the bottom values of the stack. There are a few different operations categories: decode : These transform an input value by decoding it to get a different format. string : These take a string value as input and perform transformations and checks on it. stack : These take a set of values in the input and perform multiple stack transformations and selection of specific positions in the stack. check : These assert properties about sets of operations in a side-effect free way. control : These perform operations that allow for modifying the evaluation flow. format : These parse the format-specific structure of input values and look up values in it. All operations are specified by the name identifiers as strings. Additional resources Available operations

1.20.6.11. The 3scale WebAssembly module mapping_rules object

The mapping_rules object is part of the service object. It specifies a set of REST path patterns and related 3scale metrics and count increments to use when the patterns match. You need the value if no dynamic configuration is provided in the system top-level object. If the object is provided in addition to the system top-level entry, then the mapping_rules object is evaluated first. mapping_rules is an array object. Each element of that array is a mapping_rule object. The evaluated matching mapping rules on an incoming request provide the set of 3scale methods for authorization and reporting to the APIManager .
When multiple matching rules refer to the same methods , there is a summation of deltas when calling into 3scale. For example, if two rules increase the Hits method twice with deltas of 1 and 3, a single method entry for Hits reporting to 3scale has a delta of 4. 1.20.6.12. The 3scale WebAssembly module mapping_rule object The mapping_rule object is part of an array in the mapping_rules object. The mapping_rule object fields specify the following information: The HTTP request method to match. A pattern to match the path against. The 3scale methods to report along with the amount to report. The order in which you specify the fields determines the evaluation order. Table 1.28. mapping_rule object fields Name Description Required method Specifies a string representing an HTTP request method, also known as verb. Values accepted match the any one of the accepted HTTP method names, case-insensitive. A special value of any matches any method. Yes pattern The pattern to match the HTTP request's URI path component. This pattern follows the same syntax as documented by 3scale. It allows wildcards (use of the asterisk (*) character) using any sequence of characters between braces such as {this} . Yes usages A list of usage objects. When the rule matches, all methods with their deltas are added to the list of methods sent to 3scale for authorization and reporting. Embed the usages object with the following required fields: name : The method system name to report. delta : For how much to increase that method by. Yes last Whether the successful matching of this rule should stop the evaluation of more mapping rules. Optional Boolean. The default is false The following example is independent of existing hierarchies between methods in 3scale. That is, anything run on the 3scale side will not affect this. For example, the Hits metric might be a parent of them all, so it stores 4 hits due to the sum of all reported methods in the authorized request and calls the 3scale Authrep API endpoint. The example below uses a GET request to a path, /products/1/sold , that matches all the rules. mapping_rules GET request example apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: ... mapping_rules: - method: GET pattern: / usages: - name: hits delta: 1 - method: GET pattern: /products/ usages: - name: products delta: 1 - method: ANY pattern: /products/{id}/sold usages: - name: sales delta: 1 - name: products delta: 1 ... All usages get added to the request the module performs to 3scale with usage data as follows: Hits: 1 products: 2 sales: 1 1.20.7. The 3scale WebAssembly module examples for credentials use cases You will spend most of your time applying configuration steps to obtain credentials in the requests to your services. The following are credentials examples, which you can modify to adapt to specific use cases. You can combine them all, although when you specify multiple source objects with their own lookup queries , they are evaluated in order until one of them successfully resolves. 1.20.7.1. API key (user_key) in query string parameters The following example looks up a user_key in a query string parameter or header of the same name: credentials: user_key: - query_string: keys: - user_key - header: keys: - user_key 1.20.7.2. Application ID and key The following example looks up app_key and app_id credentials in a query or headers. 
credentials: app_id: - header: keys: - app_id - query_string: keys: - app_id app_key: - header: keys: - app_key - query_string: keys: - app_key 1.20.7.3. Authorization header A request includes an app_id and app_key in an authorization header. If there is at least one or two values outputted at the end, then you can assign the app_key . The resolution here assigns the app_key if there is one or two outputted at the end. The authorization header specifies a value with the type of authorization and its value is encoded as Base64 . This means you can split the value by a space character, take the second output and then split it again using a colon (:) as the separator. For example, if you use this format app_id:app_key , the header looks like the following example for credential : You must use lower case header field names as shown in the following example: credentials: app_id: - header: keys: - authorization ops: - split: separator: " " max: 2 - length: min: 2 - drop: head: 1 - base64_urlsafe - split: max: 2 app_key: - header: keys: - app_key The example use case looks at the headers for an authorization : It takes its string value and split it by a space, checking that it generates at least two values of a credential -type and the credential itself, then dropping the credential -type. It then decodes the second value containing the data it needs, and splits it by using a colon (:) character to have an operations stack including first the app_id , then the app_key , if it exists. If app_key does not exist in the authorization header then its specific sources are checked, for example, the header with the key app_key in this case. To add extra conditions to credentials , allow Basic authorizations, where app_id is either aladdin or admin , or any app_id being at least 8 characters in length. app_key must contain a value and have a minimum of 64 characters as shown in the following example: credentials: app_id: - header: keys: - authorization ops: - split: separator: " " max: 2 - length: min: 2 - reverse - glob: - Basic - drop: tail: 1 - base64_urlsafe - split: max: 2 - test: if: length: min: 2 then: - strlen: max: 63 - or: - strlen: min: 1 - drop: tail: 1 - assert: - and: - reverse - or: - strlen: min: 8 - glob: - aladdin - admin After picking up the authorization header value, you get a Basic credential -type by reversing the stack so that the type is placed on top. Run a glob match on it. When it validates, and the credential is decoded and split, you get the app_id at the bottom of the stack, and potentially the app_key at the top. Run a test: if there are two values in the stack, meaning an app_key was acquired. Ensure the string length is between 1 and 63, including app_id and app_key . If the key's length is zero, drop it and continue as if no key exists. If there was only an app_id and no app_key , the missing else branch indicates a successful test and evaluation continues. The last operation, assert , indicates that no side-effects make it into the stack. You can then modify the stack: Reverse the stack to have the app_id at the top. Whether or not an app_key is present, reversing the stack ensures app_id is at the top. Use and to preserve the contents of the stack across tests. Then use one of the following possibilities: Make sure app_id has a string length of at least 8. Make sure app_id matches either aladdin or admin . 1.20.7.4. 
OpenID Connect (OIDC) use case For Service Mesh and the 3scale Istio adapter, you must deploy a RequestAuthentication as shown in the following example, filling in your own workload data and jwtRules : apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: bookinfo spec: selector: matchLabels: app: productpage jwtRules: - issuer: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak jwksUri: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs When you apply the RequestAuthentication , it configures Envoy with a native plugin to validate JWT tokens. The proxy validates everything before running the module so any requests that fail do not make it to the 3scale WebAssembly module. When a JWT token is validated, the proxy stores its contents in an internal metadata object, with an entry whose key depends on the specific configuration of the plugin. This use case gives you the ability to look up structure objects with a single entry containing an unknown key name. The 3scale app_id for OIDC matches the OAuth client_id . This is found in the azp or aud fields of JWT tokens. To get app_id field from Envoy's native JWT authentication filter, see the following example: credentials: app_id: - filter: path: - envoy.filters.http.jwt_authn - "0" keys: - azp - aud ops: - take: head: 1 The example instructs the module to use the filter source type to look up filter metadata for an object from the Envoy -specific JWT authentication native plugin. This plugin includes the JWT token as part of a structure object with a single entry and a pre-configured name. Use 0 to specify that you will only access the single entry. The resulting value is a structure for which you will resolve two fields: azp : The value where app_id is found. aud : The value where this information can also be found. The operation ensures only one value is held for assignment. 1.20.7.5. Picking up the JWT token from a header Some setups might have validation processes for JWT tokens where the validated token would reach this module via a header in JSON format. To get the app_id , see the following example: credentials: app_id: - header: keys: - x-jwt-payload ops: - base64_urlsafe - json: - keys: - azp - aud - take: head: 1 1.20.8. 3scale WebAssembly module minimal working configuration The following is an example of a 3scale WebAssembly module minimal working configuration. You can copy and paste this and edit it to work with your own configuration. apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: url: oci://registry.redhat.io/3scale-amp2/3scale-auth-wasm-rhel8:0.0.3 imagePullSecret: <optional_pull_secret_resource> phase: AUTHZ priority: 100 selector: labels: app: <product_page> pluginConfig: api: v1 system: name: <system_name> upstream: name: outbound|443||multitenant.3scale.net url: https://istiodevel-admin.3scale.net/ timeout: 5000 token: <token> backend: name: <backend_name> upstream: name: outbound|443||su1.3scale.net url: https://su1.3scale.net/ timeout: 5000 extensions: - no_body services: - id: '2555417834780' authorities: - "*" credentials: user_key: - query_string: keys: - <user_key> - header: keys: - <user_key> app_id: - query_string: keys: - <app_id> - header: keys: - <app_id> app_key: - query_string: keys: - <app_key> - header: keys: - <app_key> 1.21. 
Using the 3scale Istio adapter The 3scale Istio Adapter is an optional adapter that allows you to label a service running within the Red Hat OpenShift Service Mesh and integrate that service with the 3scale API Management solution. It is not required for Red Hat OpenShift Service Mesh. Important You can only use the 3scale Istio adapter with Red Hat OpenShift Service Mesh versions 2.0 and below. The Mixer component was deprecated in release 2.0 and removed in release 2.1. For Red Hat OpenShift Service Mesh versions 2.1.0 and later you should use the 3scale WebAssembly module . If you want to enable 3scale backend cache with the 3scale Istio adapter, you must also enable Mixer policy and Mixer telemetry. See Deploying the Red Hat OpenShift Service Mesh control plane . 1.21.1. Integrate the 3scale adapter with Red Hat OpenShift Service Mesh You can use these examples to configure requests to your services using the 3scale Istio Adapter. Prerequisites: Red Hat OpenShift Service Mesh version 2.x A working 3scale account ( SaaS or 3scale 2.9 On-Premises ) Enabling backend cache requires 3scale 2.9 or greater Red Hat OpenShift Service Mesh prerequisites Ensure Mixer policy enforcement is enabled. Update Mixer policy enforcement section provides instructions to check the current Mixer policy enforcement status and enable policy enforcement. Mixer policy and telemetry must be enabled if you are using a mixer plugin. You will need to properly configure the Service Mesh Control Plane (SMCP) when upgrading. Note To configure the 3scale Istio Adapter, refer to Red Hat OpenShift Service Mesh custom resources for instructions on adding adapter parameters to the custom resource file. Note Pay particular attention to the kind: handler resource. You must update this with your 3scale account credentials. You can optionally add a service_id to a handler, but this is kept for backwards compatibility only, since it would render the handler only useful for one service in your 3scale account. If you add service_id to a handler, enabling 3scale for other services requires you to create more handlers with different service_ids . Use a single handler per 3scale account by following the steps below: Procedure Create a handler for your 3scale account and specify your account credentials. Omit any service identifier. apiVersion: "config.istio.io/v1alpha2" kind: handler metadata: name: threescale spec: adapter: threescale params: system_url: "https://<organization>-admin.3scale.net/" access_token: "<ACCESS_TOKEN>" connection: address: "threescale-istio-adapter:3333" Optionally, you can provide a backend_url field within the params section to override the URL provided by the 3scale configuration. This may be useful if the adapter runs on the same cluster as the 3scale on-premise instance, and you wish to leverage the internal cluster DNS. Edit or patch the Deployment resource of any services belonging to your 3scale account as follows: Add the "service-mesh.3scale.net/service-id" label with a value corresponding to a valid service_id . Add the "service-mesh.3scale.net/credentials" label with its value being the name of the handler resource from step 1. Do step 2 to link it to your 3scale account credentials and to its service identifier, whenever you intend to add more services. Modify the rule configuration with your 3scale configuration to dispatch the rule to the threescale handler. 
Rule configuration example apiVersion: "config.istio.io/v1alpha2" kind: rule metadata: name: threescale spec: match: destination.labels["service-mesh.3scale.net"] == "true" actions: - handler: threescale.handler instances: - threescale-authorization.instance 1.21.1.1. Generating 3scale custom resources The adapter includes a tool that allows you to generate the handler , instance , and rule custom resources. Table 1.29. Usage Option Description Required Default value -h, --help Produces help output for available options No --name Unique name for this URL, token pair Yes -n, --namespace Namespace to generate templates No istio-system -t, --token 3scale access token Yes -u, --url 3scale Admin Portal URL Yes --backend-url 3scale backend URL. If set, it overrides the value that is read from system configuration No -s, --service 3scale API/Service ID No --auth 3scale authentication pattern to specify (1=API Key, 2=App Id/App Key, 3=OIDC) No Hybrid -o, --output File to save produced manifests to No Standard output --version Outputs the CLI version and exits immediately No 1.21.1.1.1. Generate templates from URL examples Note Run the following commands via oc exec from the 3scale adapter container image in Generating manifests from a deployed adapter . Use the 3scale-config-gen command to help avoid YAML syntax and indentation errors. You can omit the --service if you use the annotations. This command must be invoked from within the container image via oc exec . Procedure Use the 3scale-config-gen command to autogenerate templates files allowing the token, URL pair to be shared by multiple services as a single handler: The following example generates the templates with the service ID embedded in the handler: Additional resources Tokens . 1.21.1.2. Generating manifests from a deployed adapter Note NAME is an identifier you use to identify with the service you are managing with 3scale. The CREDENTIALS_NAME reference is an identifier that corresponds to the match section in the rule configuration. This is automatically set to the NAME identifier if you are using the CLI tool. Its value does not need to be anything specific: the label value should just match the contents of the rule. See Routing service traffic through the adapter for more information. Run this command to generate manifests from a deployed adapter in the istio-system namespace: This will produce sample output to the terminal. Edit these samples if required and create the objects using the oc create command. When the request reaches the adapter, the adapter needs to know how the service maps to an API on 3scale. You can provide this information in two ways: Label the workload (recommended) Hard code the handler as service_id Update the workload with the required annotations: Note You only need to update the service ID provided in this example if it is not already embedded in the handler. The setting in the handler takes precedence . 1.21.1.3. Routing service traffic through the adapter Follow these steps to drive traffic for your service through the 3scale adapter. Prerequisites Credentials and service ID from your 3scale administrator. Procedure Match the rule destination.labels["service-mesh.3scale.net/credentials"] == "threescale" that you previously created in the configuration, in the kind: rule resource. Add the above label to PodTemplateSpec on the Deployment of the target workload to integrate a service. the value, threescale , refers to the name of the generated handler. 
This handler stores the access token required to call 3scale. Add the destination.labels["service-mesh.3scale.net/service-id"] == "replace-me" label to the workload to pass the service ID to the adapter via the instance at request time. 1.21.2. Configure the integration settings in 3scale Follow this procedure to configure the 3scale integration settings. Note For 3scale SaaS customers, Red Hat OpenShift Service Mesh is enabled as part of the Early Access program. Procedure Navigate to [your_API_name] Integration Click Settings . Select the Istio option under Deployment . The API Key (user_key) option under Authentication is selected by default. Click Update Product to save your selection. Click Configuration . Click Update Configuration . 1.21.3. Caching behavior Responses from 3scale System APIs are cached by default within the adapter. Entries will be purged from the cache when they become older than the cacheTTLSeconds value. Also by default, automatic refreshing of cached entries will be attempted seconds before they expire, based on the cacheRefreshSeconds value. You can disable automatic refreshing by setting this value higher than the cacheTTLSeconds value. Caching can be disabled entirely by setting cacheEntriesMax to a non-positive value. By using the refreshing process, cached values whose hosts become unreachable will be retried before eventually being purged when past their expiry. 1.21.4. Authenticating requests This release supports the following authentication methods: Standard API Keys : single randomized strings or hashes acting as an identifier and a secret token. Application identifier and key pairs : immutable identifier and mutable secret key strings. OpenID authentication method : client ID string parsed from the JSON Web Token. 1.21.4.1. Applying authentication patterns Modify the instance custom resource, as illustrated in the following authentication method examples, to configure authentication behavior. You can accept the authentication credentials from: Request headers Request parameters Both request headers and query parameters Note When specifying values from headers, they must be lower case. For example, if you want to send a header as User-Key , this must be referenced in the configuration as request.headers["user-key"] . 1.21.4.1.1. API key authentication method Service Mesh looks for the API key in query parameters and request headers as specified in the user option in the subject custom resource parameter. It checks the values in the order given in the custom resource file. You can restrict the search for the API key to either query parameters or request headers by omitting the unwanted option. In this example, Service Mesh looks for the API key in the user_key query parameter. If the API key is not in the query parameter, Service Mesh then checks the user-key header. API key authentication method example apiVersion: "config.istio.io/v1alpha2" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: user: request.query_params["user_key"] | request.headers["user-key"] | "" action: path: request.url_path method: request.method | "get" If you want the adapter to examine a different query parameter or request header, change the name as appropriate. For example, to check for the API key in a query parameter named "key", change request.query_params["user_key"] to request.query_params["key"] . 1.21.4.1.2. 
Application ID and application key pair authentication method Service Mesh looks for the application ID and application key in query parameters and request headers, as specified in the properties option in the subject custom resource parameter. The application key is optional. It checks the values in the order given in the custom resource file. You can restrict the search for the credentials to either query parameters or request headers by not including the unwanted option. In this example, Service Mesh looks for the application ID and application key in the query parameters first, moving on to the request headers if needed. Application ID and application key pair authentication method example apiVersion: "config.istio.io/v1alpha2" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: app_id: request.query_params["app_id"] | request.headers["app-id"] | "" app_key: request.query_params["app_key"] | request.headers["app-key"] | "" action: path: request.url_path method: request.method | "get" If you want the adapter to examine a different query parameter or request header, change the name as appropriate. For example, to check for the application ID in a query parameter named identification , change request.query_params["app_id"] to request.query_params["identification"] . 1.21.4.1.3. OpenID authentication method To use the OpenID Connect (OIDC) authentication method , use the properties value on the subject field to set client_id , and optionally app_key . You can manipulate this object using the methods described previously. In the example configuration shown below, the client identifier (application ID) is parsed from the JSON Web Token (JWT) under the label azp . You can modify this as needed. OpenID authentication method example apiVersion: "config.istio.io/v1alpha2" kind: instance metadata: name: threescale-authorization spec: template: threescale-authorization params: subject: properties: app_key: request.query_params["app_key"] | request.headers["app-key"] | "" client_id: request.auth.claims["azp"] | "" action: path: request.url_path method: request.method | "get" service: destination.labels["service-mesh.3scale.net/service-id"] | "" For this integration to work correctly, OIDC must still be done in 3scale for the client to be created in the identity provider (IdP). You should create a Request authorization for the service you want to protect in the same namespace as that service. The JWT is passed in the Authorization header of the request. In the sample RequestAuthentication defined below, replace issuer , jwksUri , and selector as appropriate. OpenID Policy example apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: bookinfo spec: selector: matchLabels: app: productpage jwtRules: - issuer: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak jwksUri: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs 1.21.4.1.4. Hybrid authentication method You can choose to not enforce a particular authentication method and accept any valid credentials for either method. If both an API key and an application ID/application key pair are provided, Service Mesh uses the API key. In this example, Service Mesh checks for an API key in the query parameters, then the request headers. If there is no API key, it then checks for an application ID and key in the query parameters, then the request headers. 
Hybrid authentication method example apiVersion: "config.istio.io/v1alpha2" kind: instance metadata: name: threescale-authorization spec: template: authorization params: subject: user: request.query_params["user_key"] | request.headers["user-key"] | properties: app_id: request.query_params["app_id"] | request.headers["app-id"] | "" app_key: request.query_params["app_key"] | request.headers["app-key"] | "" client_id: request.auth.claims["azp"] | "" action: path: request.url_path method: request.method | "get" service: destination.labels["service-mesh.3scale.net/service-id"] | "" 1.21.5. 3scale Adapter metrics The adapter, by default reports various Prometheus metrics that are exposed on port 8080 at the /metrics endpoint. These metrics provide insight into how the interactions between the adapter and 3scale are performing. The service is labeled to be automatically discovered and scraped by Prometheus. Note There are incompatible changes in the 3scale Istio Adapter metrics since the releases in Service Mesh 1.x. In Prometheus, metrics have been renamed with one addition for the backend cache, so that the following metrics exist as of Service Mesh 2.0: Table 1.30. Prometheus metrics Metric Type Description threescale_latency Histogram Request latency between adapter and 3scale. threescale_http_total Counter HTTP Status response codes for requests to 3scale backend. threescale_system_cache_hits Counter Total number of requests to the 3scale system fetched from the configuration cache. threescale_backend_cache_hits Counter Total number of requests to 3scale backend fetched from the backend cache. 1.21.6. 3scale backend cache The 3scale backend cache provides an authorization and reporting cache for clients of the 3scale Service Management API. This cache is embedded in the adapter to enable lower latencies in responses in certain situations assuming the administrator is willing to accept the trade-offs. Note 3scale backend cache is disabled by default. 3scale backend cache functionality trades inaccuracy in rate limiting and potential loss of hits since the last flush was performed for low latency and higher consumption of resources in the processor and memory. 1.21.6.1. Advantages of enabling backend cache The following are advantages to enabling the backend cache: Enable the backend cache when you find latencies are high while accessing services managed by the 3scale Istio Adapter. Enabling the backend cache will stop the adapter from continually checking with the 3scale API manager for request authorizations, which will lower the latency. This creates an in-memory cache of 3scale authorizations for the 3scale Istio Adapter to store and reuse before attempting to contact the 3scale API manager for authorizations. Authorizations will then take much less time to be granted or denied. Backend caching is useful in cases when you are hosting the 3scale API manager in another geographical location from the service mesh running the 3scale Istio Adapter. This is generally the case with the 3scale Hosted (SaaS) platform, but also if a user hosts their 3scale API manager in another cluster located in a different geographical location, in a different availability zone, or in any case where the network overhead to reach the 3scale API manager is noticeable. 1.21.6.2. Trade-offs for having lower latencies The following are trade-offs for having lower latencies: Each 3scale adapter's authorization state updates every time a flush happens. 
This means two or more instances of the adapter will introduce more inaccuracy between flushing periods. There is a greater chance of too many requests being granted that exceed limits and introduce erratic behavior, which leads to some requests going through and some not, depending on which adapter processes each request. An adapter cache that cannot flush its data and update its authorization information risks shut down or crashing without reporting its information to the API manager. A fail open or fail closed policy will be applied when an adapter cache cannot determine whether a request must be granted or denied, possibly due to network connectivity issues in contacting the API manager. When cache misses occur, typically right after booting the adapter or after a long period of no connectivity, latencies will grow in order to query the API manager. An adapter cache must do much more work on computing authorizations than it would without an enabled cache, which will tax processor resources. Memory requirements will grow proportionally to the combination of the amount of limits, applications, and services managed by the cache. 1.21.6.3. Backend cache configuration settings The following points explain the backend cache configuration settings: Find the settings to configure the backend cache in the 3scale configuration options. The last 3 settings control enabling of backend cache: PARAM_USE_CACHE_BACKEND - set to true to enable backend cache. PARAM_BACKEND_CACHE_FLUSH_INTERVAL_SECONDS - sets time in seconds between consecutive attempts to flush cache data to the API manager. PARAM_BACKEND_CACHE_POLICY_FAIL_CLOSED - set whether or not to allow/open or deny/close requests to the services when there is not enough cached data and the 3scale API manager cannot be reached. 1.21.7. 3scale Istio Adapter APIcast emulation The 3scale Istio Adapter performs as APIcast would when the following conditions occur: When a request cannot match any mapping rule defined, the returned HTTP code is 404 Not Found. This was previously 403 Forbidden. When a request is denied because it goes over limits, the returned HTTP code is 429 Too Many Requests. This was previously 403 Forbidden. When generating default templates via the CLI, it will use underscores rather than dashes for the headers, for example: user_key rather than user-key . 1.21.8. 3scale Istio adapter verification You might want to check whether the 3scale Istio adapter is working as expected. If your adapter is not working, use the following steps to help troubleshoot the problem. Procedure Ensure the 3scale-adapter pod is running in the Service Mesh control plane namespace: USD oc get pods -n <istio-system> Check that the 3scale-adapter pod has printed out information about itself booting up, such as its version: USD oc logs <istio-system> When performing requests to the services protected by the 3scale adapter integration, always try requests that lack the right credentials and ensure they fail. Check the 3scale adapter logs to gather additional information. Additional resources Inspecting pod and container logs . 1.21.9. 3scale Istio adapter troubleshooting checklist As the administrator installing the 3scale Istio adapter, there are a number of scenarios that might be causing your integration to not function properly. Use the following list to troubleshoot your installation: Incorrect YAML indentation. Missing YAML sections. Forgot to apply the changes in the YAML to the cluster. 
Forgot to label the service workloads with the service-mesh.3scale.net/credentials key. Forgot to label the service workloads with service-mesh.3scale.net/service-id when using handlers that do not contain a service_id so they are reusable per account. The Rule custom resource points to the wrong handler or instance custom resources, or the references lack the corresponding namespace suffix. The Rule custom resource match section cannot possibly match the service you are configuring, or it points to a destination workload that is not currently running or does not exist. Wrong access token or URL for the 3scale Admin Portal in the handler. The Instance custom resource's params/subject/properties section fails to list the right parameters for app_id , app_key , or client_id , either because they specify the wrong location such as the query parameters, headers, and authorization claims, or the parameter names do not match the requests used for testing. Failing to use the configuration generator without realizing that it actually lives in the adapter container image and needs oc exec to invoke it. 1.22. Troubleshooting your service mesh This section describes how to identify and resolve common problems in Red Hat OpenShift Service Mesh. Use the following sections to help troubleshoot and debug problems when deploying Red Hat OpenShift Service Mesh on OpenShift Container Platform. 1.22.1. Understanding Service Mesh versions In order to understand what version of Red Hat OpenShift Service Mesh you have deployed on your system, you need to understand how each of the component versions is managed. Operator version - The most current Operator version is 2.3.2. The Operator version number only indicates the version of the currently installed Operator. Because the Red Hat OpenShift Service Mesh Operator supports multiple versions of the Service Mesh control plane, the version of the Operator does not determine the version of your deployed ServiceMeshControlPlane resources. Important Upgrading to the latest Operator version automatically applies patch updates, but does not automatically upgrade your Service Mesh control plane to the latest minor version. ServiceMeshControlPlane version - The ServiceMeshControlPlane version determines what version of Red Hat OpenShift Service Mesh you are using. The value of the spec.version field in the ServiceMeshControlPlane resource controls the architecture and configuration settings that are used to install and deploy Red Hat OpenShift Service Mesh. When you create the Service Mesh control plane you can set the version in one of two ways: To configure in the Form View, select the version from the Control Plane Version menu. To configure in the YAML View, set the value for spec.version in the YAML file. Operator Lifecycle Manager (OLM) does not manage Service Mesh control plane upgrades, so the version number for your Operator and ServiceMeshControlPlane (SMCP) may not match, unless you have manually upgraded your SMCP. 1.22.2. Troubleshooting Operator installation In addition to the information in this section, be sure to review the following topics: What are Operators? Operator Lifecycle Management concepts . OpenShift Operator troubleshooting section . OpenShift installation troubleshooting section . 1.22.2.1. 
Validating Operator installation When you install the Red Hat OpenShift Service Mesh Operators, OpenShift automatically creates the following objects as part of a successful Operator installation: config maps custom resource definitions deployments pods replica sets roles role bindings secrets service accounts services From the OpenShift Container Platform console You can verify that the Operator pods are available and running by using the OpenShift Container Platform console. Navigate to Workloads Pods . Select the openshift-operators namespace. Verify that the following pods exist and have a status of running : istio-operator jaeger-operator kiali-operator Select the openshift-operators-redhat namespace. Verify that the elasticsearch-operator pod exists and has a status of running . From the command line Verify the Operator pods are available and running in the openshift-operators namespace with the following command: $ oc get pods -n openshift-operators Example output

NAME                               READY   STATUS    RESTARTS   AGE
istio-operator-bb49787db-zgr87     1/1     Running   0          15s
jaeger-operator-7d5c4f57d8-9xphf   1/1     Running   0          2m42s
kiali-operator-f9c8d84f4-7xh2v     1/1     Running   0          64s

Verify the Elasticsearch operator with the following command: $ oc get pods -n openshift-operators-redhat Example output

NAME                                     READY   STATUS    RESTARTS   AGE
elasticsearch-operator-d4f59b968-796vq   1/1     Running   0          15s

1.22.2.2. Troubleshooting service mesh Operators If you experience Operator issues: Verify your Operator subscription status. Verify that you did not install a community version of the Operator, instead of the supported Red Hat version. Verify that you have the cluster-admin role to install Red Hat OpenShift Service Mesh. Check for any errors in the Operator pod logs if the issue is related to installation of Operators. Note You can install Operators only through the OpenShift console; the OperatorHub is not accessible from the command line. 1.22.2.2.1. Viewing Operator pod logs You can view Operator logs by using the oc logs command. Red Hat may request logs to help resolve support cases. Procedure To view Operator pod logs, enter the command: $ oc logs -n openshift-operators <podName> For example, $ oc logs -n openshift-operators istio-operator-bb49787db-zgr87 1.22.3. Troubleshooting the control plane The Service Mesh control plane is composed of Istiod, which consolidates several control plane components (Citadel, Galley, Pilot) into a single binary. Deploying the ServiceMeshControlPlane also creates the other components that make up Red Hat OpenShift Service Mesh as described in the architecture topic. 1.22.3.1. Validating the Service Mesh control plane installation When you create the Service Mesh control plane, the Service Mesh Operator uses the parameters that you have specified in the ServiceMeshControlPlane resource file to do the following: Creates the Istio components and deploys the following pods: istiod istio-ingressgateway istio-egressgateway grafana prometheus Calls the Kiali Operator to create the Kiali deployment based on configuration in either the SMCP or the Kiali custom resource. Note You view the Kiali components under the Kiali Operator, not the Service Mesh Operator. Calls the Red Hat OpenShift distributed tracing platform Operator to create distributed tracing platform components based on configuration in either the SMCP or the Jaeger custom resource.
Note You view the Jaeger components under the Red Hat OpenShift distributed tracing platform Operator and the Elasticsearch components under the Red Hat Elasticsearch Operator, not the Service Mesh Operator. From the OpenShift Container Platform console You can verify the Service Mesh control plane installation in the OpenShift Container Platform web console. Navigate to Operators Installed Operators . Select the <istio-system> namespace. Select the Red Hat OpenShift Service Mesh Operator. Click the Istio Service Mesh Control Plane tab. Click the name of your control plane, for example basic . To view the resources created by the deployment, click the Resources tab. You can use the filter to narrow your view, for example, to check that all the Pods have a status of running . If the SMCP status indicates any problems, check the status: output in the YAML file for more information. Navigate back to Operators Installed Operators . Select the OpenShift Elasticsearch Operator. Click the Elasticsearch tab. Click the name of the deployment, for example elasticsearch . To view the resources created by the deployment, click the Resources tab. If the Status column indicates any problems, check the status: output on the YAML tab for more information. Navigate back to Operators Installed Operators . Select the Red Hat OpenShift distributed tracing platform Operator. Click the Jaeger tab. Click the name of your deployment, for example jaeger . To view the resources created by the deployment, click the Resources tab. If the Status column indicates any problems, check the status: output on the YAML tab for more information. Navigate to Operators Installed Operators . Select the Kiali Operator. Click the Kiali tab. Click the name of your deployment, for example kiali . To view the resources created by the deployment, click the Resources tab. If the Status column indicates any problems, check the status: output on the YAML tab for more information. From the command line Run the following command to see if the Service Mesh control plane pods are available and running, where istio-system is the namespace where you installed the SMCP. $ oc get pods -n istio-system Example output

NAME                                   READY   STATUS    RESTARTS   AGE
grafana-6776785cfc-6fz7t               2/2     Running   0          102s
istio-egressgateway-5f49dd99-l9ppq     1/1     Running   0          103s
istio-ingressgateway-6dc885c48-jjd8r   1/1     Running   0          103s
istiod-basic-6c9cc55998-wg4zq          1/1     Running   0          2m14s
jaeger-6865d5d8bf-zrfss                2/2     Running   0          100s
kiali-579799fbb7-8mwc8                 1/1     Running   0          46s
prometheus-5c579dfb-6qhjk              2/2     Running   0          115s

Check the status of the Service Mesh control plane deployment by using the following command. Replace istio-system with the namespace where you deployed the SMCP. $ oc get smcp -n <istio-system> The installation has finished successfully when the STATUS column is ComponentsReady . Example output

NAME    READY   STATUS            PROFILES      VERSION   AGE
basic   10/10   ComponentsReady   ["default"]   2.1.3     4m2s

If you have modified and redeployed your Service Mesh control plane, the status should read UpdateSuccessful . Example output

NAME            READY   STATUS             TEMPLATE   VERSION   AGE
basic-install   10/10   UpdateSuccessful   default    v1.1      3d16h

If the SMCP status indicates anything other than ComponentsReady , check the status: output in the SMCP resource for more information. $ oc describe smcp <smcp-name> -n <controlplane-namespace> Example command $ oc describe smcp basic -n istio-system
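If you prefer to wait for the control plane from a script or CI job, the same readiness information is exposed through the resource conditions described in the status parameters reference later in this document. A sketch, assuming an SMCP named basic in the istio-system namespace:

$ oc wait --for condition=Ready -n istio-system smcp/basic --timeout 180s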
Check the status of the Jaeger deployment with the following command, where istio-system is the namespace where you deployed the SMCP. $ oc get jaeger -n <istio-system> Example output

NAME     STATUS    VERSION   STRATEGY   STORAGE   AGE
jaeger   Running   1.30.0    allinone   memory    15m

Check the status of the Kiali deployment with the following command, where istio-system is the namespace where you deployed the SMCP. $ oc get kiali -n <istio-system> Example output

NAME    AGE
kiali   15m

1.22.3.1.1. Accessing the Kiali console You can view your application's topology, health, and metrics in the Kiali console. If your service is experiencing problems, the Kiali console lets you view the data flow through your service. You can view insights about the mesh components at different levels, including abstract applications, services, and workloads. Kiali also provides an interactive graph view of your namespace in real time. To access the Kiali console you must have Red Hat OpenShift Service Mesh installed, and Kiali installed and configured. The installation process creates a route to access the Kiali console. If you know the URL for the Kiali console, you can access it directly. If you do not know the URL, use the following directions. Procedure for administrators Log in to the OpenShift Container Platform web console with an administrator role. Click Home Projects . On the Projects page, if necessary, use the filter to find the name of your project. Click the name of your project, for example, bookinfo . On the Project details page, in the Launcher section, click the Kiali link. Log in to the Kiali console with the same user name and password that you use to access the OpenShift Container Platform console. When you first log in to the Kiali Console, you see the Overview page which displays all the namespaces in your service mesh that you have permission to view. If you are validating the console installation and namespaces have not yet been added to the mesh, there might not be any data to display other than istio-system . Procedure for developers Log in to the OpenShift Container Platform web console with a developer role. Click Project . On the Project Details page, if necessary, use the filter to find the name of your project. Click the name of your project, for example, bookinfo . On the Project page, in the Launcher section, click the Kiali link. Click Log In With OpenShift . 1.22.3.1.2. Accessing the Jaeger console To access the Jaeger console you must have Red Hat OpenShift Service Mesh installed, and Red Hat OpenShift distributed tracing platform installed and configured. The installation process creates a route to access the Jaeger console. If you know the URL for the Jaeger console, you can access it directly. If you do not know the URL, use the following directions. Procedure from OpenShift console Log in to the OpenShift Container Platform web console as a user with cluster-admin rights. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Navigate to Networking Routes . On the Routes page, select the Service Mesh control plane project, for example istio-system , from the Namespace menu. The Location column displays the linked address for each route. If necessary, use the filter to find the jaeger route. Click the route Location to launch the console. Click Log In With OpenShift . Procedure from Kiali console Launch the Kiali console.
Click Distributed Tracing in the left navigation pane. Click Log In With OpenShift . Procedure from the CLI Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. $ oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443 To query for details of the route using the command line, enter the following command. In this example, istio-system is the Service Mesh control plane namespace. $ export JAEGER_URL=$(oc get route -n istio-system jaeger -o jsonpath='{.spec.host}') Launch a browser and navigate to https://<JAEGER_URL> , where <JAEGER_URL> is the route that you discovered in the previous step. Log in using the same user name and password that you use to access the OpenShift Container Platform console. If you have added services to the service mesh and have generated traces, you can use the filters and Find Traces button to search your trace data. If you are validating the console installation, there is no trace data to display. 1.22.3.2. Troubleshooting the Service Mesh control plane If you are experiencing issues while deploying the Service Mesh control plane: Ensure that the ServiceMeshControlPlane resource is installed in a project that is separate from your services and Operators. This documentation uses the istio-system project as an example, but you can deploy your control plane in any project as long as it is separate from the project that contains your Operators and services. Ensure that the ServiceMeshControlPlane and Jaeger custom resources are deployed in the same project. For example, use the istio-system project for both. 1.22.4. Troubleshooting the data plane The data plane is a set of intelligent proxies that intercept and control all inbound and outbound network communications between services in the service mesh. Red Hat OpenShift Service Mesh relies on a proxy sidecar within the application's pod to provide service mesh capabilities to the application. 1.22.4.1. Troubleshooting sidecar injection Red Hat OpenShift Service Mesh does not automatically inject proxy sidecars into pods. You must opt in to sidecar injection. 1.22.4.1.1. Troubleshooting Istio sidecar injection Check to see if automatic injection is enabled in the Deployment for your application. If automatic injection for the Envoy proxy is enabled, there should be a sidecar.istio.io/inject:"true" annotation in the Deployment resource under spec.template.metadata.annotations. 1.22.4.1.2. Troubleshooting Jaeger agent sidecar injection Check to see if automatic injection is enabled in the Deployment for your application. If automatic injection for the Jaeger agent is enabled, there should be a sidecar.jaegertracing.io/inject:"true" annotation in the Deployment resource. For more information about sidecar injection, see Enabling automatic injection .
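For reference, a Deployment that opts in to both sidecars carries the annotations in its pod template. The following sketch uses illustrative names and an illustrative image; in practice you only add the annotation for the sidecar you need:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: productpage        # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: productpage
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "true"           # opt in to Envoy proxy injection
        sidecar.jaegertracing.io/inject: "true"   # opt in to Jaeger agent injection
      labels:
        app: productpage
    spec:
      containers:
      - name: productpage
        image: quay.io/example/productpage:latest   # illustrative image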
1.23. Troubleshooting Envoy proxy The Envoy proxy intercepts all inbound and outbound traffic for all services in the service mesh. Envoy also collects and reports telemetry on the service mesh. Envoy is deployed as a sidecar to the relevant service in the same pod. 1.23.1. Enabling Envoy access logs Envoy access logs are useful in diagnosing traffic failures and flows, and help with end-to-end traffic flow analysis. To enable access logging for all istio-proxy containers, edit the ServiceMeshControlPlane (SMCP) object to add a file name for the logging output. Procedure Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. Enter the following command. Then, enter your username and password when prompted. $ oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443 Change to the project where you installed the Service Mesh control plane, for example istio-system . $ oc project istio-system Edit the ServiceMeshControlPlane file. $ oc edit smcp <smcp_name> As shown in the following example, use name to specify the file name for the proxy log. If you do not specify a value for name , no log entries will be written.

spec:
  proxy:
    accessLogging:
      file:
        name: /dev/stdout     #file name

For more information about troubleshooting pod issues, see Investigating pod issues .
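After the change rolls out, the access log can be read from any istio-proxy container. A sketch, assuming a workload named productpage-v1 in the bookinfo namespace (both names are illustrative):

$ oc logs -n bookinfo deployment/productpage-v1 -c istio-proxy

Because the example above writes to /dev/stdout , the access log entries appear in the istio-proxy container log together with the proxy's own messages.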
1.23.2. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 1.23.2.1. About the Red Hat Knowledgebase The Red Hat Knowledgebase provides rich content aimed at helping you make the most of Red Hat's products and technologies. The Red Hat Knowledgebase consists of articles, product documentation, and videos outlining best practices on installing, configuring, and using Red Hat products. In addition, you can search for solutions to known issues, each providing concise root cause descriptions and remedial steps. 1.23.2.2. Searching the Red Hat Knowledgebase In the event of an OpenShift Container Platform issue, you can perform an initial search to determine if a solution already exists within the Red Hat Knowledgebase. Prerequisites You have a Red Hat Customer Portal account. Procedure Log in to the Red Hat Customer Portal . In the main Red Hat Customer Portal search field, input keywords and strings relating to the problem, including: OpenShift Container Platform components (such as etcd ) Related procedure (such as installation ) Warnings, error messages, and other outputs related to explicit failures Click Search . Select the OpenShift Container Platform product filter. Select the Knowledgebase content type filter. 1.23.2.3. About the must-gather tool The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues, including: Resource definitions Service logs By default, the oc adm must-gather command uses the default plugin image and writes into ./must-gather.local . Alternatively, you can collect specific information by running the command with the appropriate arguments as described in the following sections: To collect data related to one or more specific features, use the --image argument with an image, as listed in a following section. For example: $ oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.9.0 To collect the audit logs, use the -- /usr/bin/gather_audit_logs argument, as described in a following section. For example: $ oc adm must-gather -- /usr/bin/gather_audit_logs Note Audit logs are not collected as part of the default set of information to reduce the size of the files. When you run oc adm must-gather , a new pod with a random name is created in a new project on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local . This directory is created in the current working directory. For example:

NAMESPACE                      NAME                READY   STATUS    RESTARTS   AGE
...
openshift-must-gather-5drcj    must-gather-bklx4   2/2     Running   0          72s
openshift-must-gather-5drcj    must-gather-s8sdh   2/2     Running   0          72s
...

1.23.2.4. About collecting service mesh data You can use the oc adm must-gather CLI command to collect information about your cluster, including features and objects associated with Red Hat OpenShift Service Mesh. Prerequisites Access to the cluster as a user with the cluster-admin role. The OpenShift Container Platform CLI ( oc ) installed. Procedure To collect Red Hat OpenShift Service Mesh data with must-gather , you must specify the Red Hat OpenShift Service Mesh image. $ oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.3 To collect Red Hat OpenShift Service Mesh data for a specific Service Mesh control plane namespace with must-gather , you must specify the Red Hat OpenShift Service Mesh image and namespace. In this example, replace <namespace> with your Service Mesh control plane namespace, such as istio-system . $ oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.3 gather <namespace> For prompt support, supply diagnostic information for both OpenShift Container Platform and Red Hat OpenShift Service Mesh. 1.23.2.5. Submitting a support case Prerequisites You have installed the OpenShift CLI ( oc ). You have a Red Hat Customer Portal account. You have access to OpenShift Cluster Manager . Procedure Log in to the Red Hat Customer Portal and select SUPPORT CASES Open a case . Select the appropriate category for your issue (such as Defect / Bug ), product ( OpenShift Container Platform ), and product version ( 4.9 , if this is not already autofilled). Review the list of suggested Red Hat Knowledgebase solutions for a potential match against the problem that is being reported. If the suggested articles do not address the issue, click Continue . Enter a concise but descriptive problem summary and further details about the symptoms being experienced, as well as your expectations. Review the updated list of suggested Red Hat Knowledgebase solutions for a potential match against the problem that is being reported. The list is refined as you provide more information during the case creation process. If the suggested articles do not address the issue, click Continue . Ensure that the account information presented is as expected, and if not, amend accordingly. Check that the autofilled OpenShift Container Platform Cluster ID is correct. If it is not, manually obtain your cluster ID. To manually obtain your cluster ID using the OpenShift Container Platform web console: Navigate to Home Dashboards Overview . Find the value in the Cluster ID field of the Details section.
Alternatively, it is possible to open a new support case through the OpenShift Container Platform web console and have your cluster ID autofilled. From the toolbar, navigate to (?) Help Open Support Case . The Cluster ID value is autofilled. To obtain your cluster ID using the OpenShift CLI ( oc ), run the following command: $ oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}' Complete the following questions where prompted and then click Continue : Where are you experiencing the behavior? What environment? When does the behavior occur? Frequency? Repeatedly? At certain times? What information can you provide around time-frames and the business impact? Upload relevant diagnostic data files and click Continue . It is recommended to include data gathered using the oc adm must-gather command as a starting point, plus any issue specific data that is not collected by that command. Input relevant case management details and click Continue . Preview the case details and click Submit . 1.24. Service Mesh control plane configuration reference You can customize your Red Hat OpenShift Service Mesh by modifying the default ServiceMeshControlPlane (SMCP) resource or by creating a completely custom SMCP resource. This reference section documents the configuration options available for the SMCP resource. 1.24.1. Service Mesh Control plane parameters The following table lists the top-level parameters for the ServiceMeshControlPlane resource. Table 1.31. ServiceMeshControlPlane resource parameters Name Description Type apiVersion APIVersion defines the versioned schema of this representation of an object. Servers convert recognized schemas to the latest internal value, and may reject unrecognized values. The value for ServiceMeshControlPlane version 2.0 is maistra.io/v2 . kind Kind is a string value that represents the REST resource this object represents. ServiceMeshControlPlane is the only valid value for a ServiceMeshControlPlane. metadata Metadata about this ServiceMeshControlPlane instance. You can provide a name for your Service Mesh control plane installation to keep track of your work, for example, basic . string spec The specification of the desired state of this ServiceMeshControlPlane . This includes the configuration options for all components that comprise the Service Mesh control plane. For more information, see Table 2. status The current status of this ServiceMeshControlPlane and the components that comprise the Service Mesh control plane. For more information, see Table 3.
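As a minimal illustration of these top-level parameters, a sketch of a ServiceMeshControlPlane resource that sets only a name and a version; the values basic , istio-system , and v2.3 match the examples used throughout this reference:

apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
  namespace: istio-system
spec:
  version: v2.3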
The following table lists the specifications for the ServiceMeshControlPlane resource. Changing these parameters configures Red Hat OpenShift Service Mesh components. Table 1.32. ServiceMeshControlPlane resource spec Name Description Configurable parameters addons The addons parameter configures additional features beyond core Service Mesh control plane components, such as visualization, or metric storage. 3scale , grafana , jaeger , kiali , and prometheus . cluster The cluster parameter sets the general configuration of the cluster (cluster name, network name, multi-cluster, mesh expansion, etc.) meshExpansion , multiCluster , name , and network gateways You use the gateways parameter to configure ingress and egress gateways for the mesh. enabled , additionalEgress , additionalIngress , egress , ingress , and openshiftRoute general The general parameter represents general Service Mesh control plane configuration that does not fit anywhere else. logging and validationMessages policy You use the policy parameter to configure policy checking for the Service Mesh control plane. Policy checking can be enabled by setting spec.policy.enabled to true . mixer , remote , or type . type can be set to Istiod , Mixer or None . profiles You select the ServiceMeshControlPlane profile to use for default values using the profiles parameter. default proxy You use the proxy parameter to configure the default behavior for sidecars. accessLogging , adminPort , concurrency , and envoyMetricsService runtime You use the runtime parameter to configure the Service Mesh control plane components. components , and defaults security The security parameter allows you to configure aspects of security for the Service Mesh control plane. certificateAuthority , controlPlane , identity , dataPlane , and trust techPreview The techPreview parameter enables early access to features that are in technology preview. N/A telemetry If spec.mixer.telemetry.enabled is set to true , telemetry is enabled. mixer , remote , and type . type can be set to Istiod , Mixer or None . tracing You use the tracing parameter to enable distributed tracing for the mesh. sampling , type . type can be set to Jaeger or None . version You use the version parameter to specify what Maistra version of the Service Mesh control plane to install. When creating a ServiceMeshControlPlane with an empty version, the admission webhook sets the version to the current version. New ServiceMeshControlPlanes with an empty version are set to v2.0 . Existing ServiceMeshControlPlanes with an empty version keep their setting. string ControlPlaneStatus represents the current state of your service mesh. Table 1.33. ServiceMeshControlPlane resource ControlPlaneStatus Name Description Type annotations The annotations parameter stores additional, usually redundant status information, such as the number of components deployed by the ServiceMeshControlPlane . These statuses are used by the command line tool, oc , which does not yet allow counting objects in JSONPath expressions. Not configurable conditions Represents the latest available observations of the object's current state. Reconciled indicates whether the operator has finished reconciling the actual state of deployed components with the configuration in the ServiceMeshControlPlane resource. Installed indicates whether the Service Mesh control plane has been installed. Ready indicates whether all Service Mesh control plane components are ready. string components Shows the status of each deployed Service Mesh control plane component. string appliedSpec The resulting specification of the configuration options after all profiles have been applied. ControlPlaneSpec appliedValues The resulting values.yaml used to generate the charts. ControlPlaneSpec chartVersion The version of the charts that were last processed for this resource. string observedGeneration The generation observed by the controller during the most recent reconciliation. The information in the status pertains to this particular generation of the object. The status.conditions are not up-to-date if the status.observedGeneration field doesn't match metadata.generation . integer operatorVersion The version of the operator that last processed this resource.
string readiness The readiness status of components & owned resources. string This example ServiceMeshControlPlane definition contains all of the supported parameters. Example ServiceMeshControlPlane resource apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.3 proxy: runtime: container: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 128Mi tracing: type: Jaeger gateways: ingress: # istio-ingressgateway service: type: ClusterIP ports: - name: status-port port: 15020 - name: http2 port: 80 targetPort: 8080 - name: https port: 443 targetPort: 8443 meshExpansionPorts: [] egress: # istio-egressgateway service: type: ClusterIP ports: - name: status-port port: 15020 - name: http2 port: 80 targetPort: 8080 - name: https port: 443 targetPort: 8443 additionalIngress: some-other-ingress-gateway: {} additionalEgress: some-other-egress-gateway: {} policy: type: Mixer mixer: # only applies if policy.type: Mixer enableChecks: true failOpen: false telemetry: type: Istiod # or Mixer mixer: # only applies if telemetry.type: Mixer, for v1 telemetry sessionAffinity: false batching: maxEntries: 100 maxTime: 1s adapters: kubernetesenv: true stdio: enabled: true outputAsJSON: true addons: grafana: enabled: true install: config: env: {} envSecrets: {} persistence: enabled: true storageClassName: "" accessMode: ReadWriteOnce capacity: requests: storage: 5Gi service: ingress: contextPath: /grafana tls: termination: reencrypt kiali: name: kiali enabled: true install: # install kiali CR if not present dashboard: viewOnly: false enableGrafana: true enableTracing: true enablePrometheus: true service: ingress: contextPath: /kiali jaeger: name: jaeger install: storage: type: Elasticsearch # or Memory memory: maxTraces: 100000 elasticsearch: nodeCount: 3 storage: {} redundancyPolicy: SingleRedundancy indexCleaner: {} ingress: {} # jaeger ingress configuration runtime: components: pilot: deployment: replicas: 2 pod: affinity: {} container: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 128Mi grafana: deployment: {} pod: {} kiali: deployment: {} pod: {} 1.24.2. spec parameters 1.24.2.1. general parameters Here is an example that illustrates the spec.general parameters for the ServiceMeshControlPlane object and a description of the available parameters with appropriate values. Example general parameters apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: general: logging: componentLevels: {} # misc: error logAsJSON: false validationMessages: true Table 1.34. Istio general parameters Parameter Description Values Default value Use to configure logging for the Service Mesh control plane components. N/A Use to specify the component logging level. Possible values: trace , debug , info , warning , error , fatal , panic . N/A Possible values: trace , debug , info , warning , error , fatal , panic . N/A Use to enable or disable JSON logging. true / false N/A Use to enable or disable validation messages to the status fields of istio.io resources. This can be useful for detecting configuration errors in resources. true / false N/A 1.24.2.2. profiles parameters You can create reusable configurations with ServiceMeshControlPlane object profiles. If you do not configure the profile setting, Red Hat OpenShift Service Mesh uses the default profile. 
Here is an example that illustrates the spec.profiles parameter for the ServiceMeshControlPlane object: Example profiles parameters

apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
spec:
  profiles:
  - YourProfileName

For information about creating profiles, see Creating control plane profiles . For more detailed examples of security configuration, see Mutual Transport Layer Security (mTLS) . 1.24.2.3. techPreview parameters The spec.techPreview parameter enables early access to features that are in Technology Preview. Important Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.24.2.4. tracing parameters The following example illustrates the spec.tracing parameters for the ServiceMeshControlPlane object, and a description of the available parameters with appropriate values. Example tracing parameters

apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
spec:
  version: v2.3
  tracing:
    sampling: 100
    type: Jaeger

Table 1.35. Istio tracing parameters Parameter Description Values Default value The sampling rate determines how often the Envoy proxy generates a trace. You use the sampling rate to control what percentage of requests get reported to your tracing system. Integer values between 0 and 10000 representing increments of 0.01% (0 to 100%). For example, setting the value to 10 samples 0.1% of requests, setting the value to 100 samples 1% of requests, setting the value to 500 samples 5% of requests, and a setting of 10000 samples 100% of requests. 10000 (100% of traces) Currently, the only tracing type that is supported is Jaeger . Jaeger is enabled by default. To disable tracing, set the type parameter to None . None , Jaeger Jaeger 1.24.2.5. version parameter The Red Hat OpenShift Service Mesh Operator supports installation of different versions of the ServiceMeshControlPlane . You use the version parameter to specify what version of the Service Mesh control plane to install. If you do not specify a version parameter when creating your SMCP, the Operator sets the value to the latest version (2.3). Existing ServiceMeshControlPlane objects keep their version setting during upgrades of the Operator. 1.24.2.6. 3scale configuration The following table explains the parameters for the 3scale Istio Adapter in the ServiceMeshControlPlane resource. Example 3scale parameters

spec:
  addons:
    3Scale:
      enabled: false
      PARAM_THREESCALE_LISTEN_ADDR: 3333
      PARAM_THREESCALE_LOG_LEVEL: info
      PARAM_THREESCALE_LOG_JSON: true
      PARAM_THREESCALE_LOG_GRPC: false
      PARAM_THREESCALE_REPORT_METRICS: true
      PARAM_THREESCALE_METRICS_PORT: 8080
      PARAM_THREESCALE_CACHE_TTL_SECONDS: 300
      PARAM_THREESCALE_CACHE_REFRESH_SECONDS: 180
      PARAM_THREESCALE_CACHE_ENTRIES_MAX: 1000
      PARAM_THREESCALE_CACHE_REFRESH_RETRIES: 1
      PARAM_THREESCALE_ALLOW_INSECURE_CONN: false
      PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS: 10
      PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS: 60
      PARAM_USE_CACHED_BACKEND: false
      PARAM_BACKEND_CACHE_FLUSH_INTERVAL_SECONDS: 15
      PARAM_BACKEND_CACHE_POLICY_FAIL_CLOSED: true

Table 1.36.
3scale parameters Parameter Description Values Default value enabled Whether to use the 3scale adapter true / false false PARAM_THREESCALE_LISTEN_ADDR Sets the listen address for the gRPC server Valid port number 3333 PARAM_THREESCALE_LOG_LEVEL Sets the minimum log output level. debug , info , warn , error , or none info PARAM_THREESCALE_LOG_JSON Controls whether the log is formatted as JSON true / false true PARAM_THREESCALE_LOG_GRPC Controls whether the log contains gRPC info true / false true PARAM_THREESCALE_REPORT_METRICS Controls whether 3scale system and backend metrics are collected and reported to Prometheus true / false true PARAM_THREESCALE_METRICS_PORT Sets the port that the 3scale /metrics endpoint can be scraped from Valid port number 8080 PARAM_THREESCALE_CACHE_TTL_SECONDS Time period, in seconds, to wait before purging expired items from the cache Time period in seconds 300 PARAM_THREESCALE_CACHE_REFRESH_SECONDS Time period before expiry when cache elements are attempted to be refreshed Time period in seconds 180 PARAM_THREESCALE_CACHE_ENTRIES_MAX Max number of items that can be stored in the cache at any time. Set to 0 to disable caching Valid number 1000 PARAM_THREESCALE_CACHE_REFRESH_RETRIES The number of times unreachable hosts are retried during a cache update loop Valid number 1 PARAM_THREESCALE_ALLOW_INSECURE_CONN Allows skipping certificate verification when calling 3scale APIs. Enabling this is not recommended. true / false false PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS Sets the number of seconds to wait before terminating requests to 3scale System and Backend Time period in seconds 10 PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS Sets the maximum amount of seconds (+/-10% jitter) a connection may exist before it is closed Time period in seconds 60 PARAM_USE_CACHE_BACKEND If true, attempt to create an in-memory apisonator cache for authorization requests true / false false PARAM_BACKEND_CACHE_FLUSH_INTERVAL_SECONDS If the backend cache is enabled, this sets the interval in seconds for flushing the cache against 3scale Time period in seconds 15 PARAM_BACKEND_CACHE_POLICY_FAIL_CLOSED Whenever the backend cache cannot retrieve authorization data, whether to deny (closed) or allow (open) requests true / false true 1.24.3. status parameter The status parameter describes the current state of your service mesh. This information is generated by the Operator and is read-only. Table 1.37. Istio status parameters Name Description Type observedGeneration The generation observed by the controller during the most recent reconciliation. The information in the status pertains to this particular generation of the object. The status.conditions are not up-to-date if the status.observedGeneration field doesn't match metadata.generation . integer annotations The annotations parameter stores additional, usually redundant status information, such as the number of components deployed by the ServiceMeshControlPlane object. These statuses are used by the command line tool, oc , which does not yet allow counting objects in JSONPath expressions. Not configurable readiness The readiness status of components and owned resources. string operatorVersion The version of the Operator that last processed this resource. string components Shows the status of each deployed Service Mesh control plane component. string appliedSpec The resulting specification of the configuration options after all profiles have been applied.
ControlPlaneSpec conditions Represents the latest available observations of the object's current state. Reconciled indicates that the Operator has finished reconciling the actual state of deployed components with the configuration in the ServiceMeshControlPlane resource. Installed indicates that the Service Mesh control plane has been installed. Ready indicates that all Service Mesh control plane components are ready. string chartVersion The version of the charts that were last processed for this resource. string appliedValues The resulting values.yaml file that was used to generate the charts. ControlPlaneSpec 1.24.4. Additional resources For more information about how to configure the features in the ServiceMeshControlPlane resource, see the following links: Security Traffic management Metrics and traces 1.25. Kiali configuration reference When the Service Mesh Operator creates the ServiceMeshControlPlane it also processes the Kiali resource. The Kiali Operator then uses this object when creating Kiali instances. 1.25.1. Specifying Kiali configuration in the SMCP You can configure Kiali under the addons section of the ServiceMeshControlPlane resource. Kiali is enabled by default. To disable Kiali, set spec.addons.kiali.enabled to false . You can specify your Kiali configuration in either of two ways: Specify the Kiali configuration in the ServiceMeshControlPlane resource under spec.addons.kiali.install . This approach has some limitations, because the complete list of Kiali configurations is not available in the SMCP. Configure and deploy a Kiali instance and specify the name of the Kiali resource as the value for spec.addons.kiali.name in the ServiceMeshControlPlane resource. You must create the CR in the same namespace as the Service Mesh control plane, for example, istio-system . If a Kiali resource matching the value of name exists, the control plane will configure that Kiali resource for use with the control plane. This approach lets you fully customize your Kiali configuration in the Kiali resource. Note that with this approach, various fields in the Kiali resource are overwritten by the Service Mesh Operator, specifically, the accessible_namespaces list, as well as the endpoints for Grafana, Prometheus, and tracing. Example SMCP parameters for Kiali apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: addons: kiali: name: kiali enabled: true install: dashboard: viewOnly: false enableGrafana: true enableTracing: true enablePrometheus: true service: ingress: contextPath: /kiali Table 1.38. ServiceMeshControlPlane Kiali parameters Parameter Description Values Default value Name of Kiali custom resource. If a Kiali CR matching the value of name exists, the Service Mesh Operator will use that CR for the installation. If no Kiali CR exists, the Operator will create one using this name and the configuration options specified in the SMCP. string kiali This parameter enables or disables Kiali. Kiali is enabled by default. true / false true Install a Kiali resource if the named Kiali resource is not present. The install section is ignored if addons.kiali.enabled is set to false . Configuration parameters for the dashboards shipped with Kiali. This parameter enables or disables view-only mode for the Kiali console. When view-only mode is enabled, users cannot use the Kiali console to make changes to the Service Mesh. true / false false Grafana endpoint configured based on spec.addons.grafana configuration. 
true / false true Prometheus endpoint configured based on spec.addons.prometheus configuration. true / false true Tracing endpoint configured based on Jaeger custom resource configuration. true / false true Configuration parameters for the Kubernetes service associated with the Kiali installation. Use to specify additional metadata to apply to resources. N/A N/A Use to specify additional annotations to apply to the component's service. string N/A Use to specify additional labels to apply to the component's service. string N/A Use to specify details for accessing the component's service through an OpenShift Route. N/A N/A Use to specify additional annotations to apply to the component's service ingress. string N/A Use to specify additional labels to apply to the component's service ingress. string N/A Use to customize an OpenShift Route for the service associated with a component. true / false true Use to specify the context path to the service. string N/A Use to specify a single hostname per OpenShift route. An empty hostname implies a default hostname for the Route. string N/A Use to configure the TLS for the OpenShift route. N/A Use to specify the nodePort for the component's service Values.<component>.service.nodePort.port integer N/A 1.25.2. Specifying Kiali configuration in a Kiali custom resource You can fully customize your Kiali deployment by configuring Kiali in the Kiali custom resource (CR) rather than in the ServiceMeshControlPlane (SMCP) resource. This configuration is sometimes called an "external Kiali" since the configuration is specified outside of the SMCP. Note You must deploy the ServiceMeshControlPlane and Kiali custom resources in the same namespace. For example, istio-system . You can configure and deploy a Kiali instance and then specify the name of the Kiali resource as the value for spec.addons.kiali.name in the SMCP resource. If a Kiali CR matching the value of name exists, the Service Mesh control plane will use the existing installation. This approach lets you fully customize your Kiali configuration. 1.26. Jaeger configuration reference When the Service Mesh Operator deploys the ServiceMeshControlPlane resource, it can also create the resources for distributed tracing. Service Mesh uses Jaeger for distributed tracing. 1.26.1. Enabling and disabling tracing You enable distributed tracing by specifying a tracing type and a sampling rate in the ServiceMeshControlPlane resource. Default all-in-one Jaeger parameters apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.3 tracing: sampling: 100 type: Jaeger Currently, the only tracing type that is supported is Jaeger . Jaeger is enabled by default. To disable tracing, set type to None . The sampling rate determines how often the Envoy proxy generates a trace. You can use the sampling rate option to control what percentage of requests get reported to your tracing system. You can configure this setting based upon your traffic in the mesh and the amount of tracing data you want to collect. You configure sampling as a scaled integer representing 0.01% increments. For example, setting the value to 10 samples 0.1% of traces, setting the value to 500 samples 5% of traces, and a setting of 10000 samples 100% of traces. Note The SMCP sampling configuration option controls the Envoy sampling rate. You configure the Jaeger trace sampling rate in the Jaeger custom resource. 1.26.2. 
Specifying Jaeger configuration in the SMCP You configure Jaeger under the addons section of the ServiceMeshControlPlane resource. However, there are some limitations to what you can configure in the SMCP. When the SMCP passes configuration information to the Red Hat OpenShift distributed tracing platform Operator, it triggers one of three deployment strategies: allInOne , production , or streaming . 1.26.3. Deploying the distributed tracing platform The distributed tracing platform has predefined deployment strategies. You specify a deployment strategy in the Jaeger custom resource (CR) file. When you create an instance of the distributed tracing platform, the Red Hat OpenShift distributed tracing platform Operator uses this configuration file to create the objects necessary for the deployment. The Red Hat OpenShift distributed tracing platform Operator currently supports the following deployment strategies: allInOne (default) - This strategy is intended for development, testing, and demo purposes and it is not for production use. The main back-end components, Agent, Collector, and Query service, are all packaged into a single executable, which is configured (by default) to use in-memory storage. You can configure this deployment strategy in the SMCP. Note In-memory storage is not persistent, which means that if the Jaeger instance shuts down, restarts, or is replaced, your trace data will be lost. And in-memory storage cannot be scaled, since each pod has its own memory. For persistent storage, you must use the production or streaming strategies, which use Elasticsearch as the default storage. production - The production strategy is intended for production environments, where long term storage of trace data is important, and a more scalable and highly available architecture is required. Each back-end component is therefore deployed separately. The Agent can be injected as a sidecar on the instrumented application. The Query and Collector services are configured with a supported storage type, which is currently Elasticsearch. Multiple instances of each of these components can be provisioned as required for performance and resilience purposes. You can configure this deployment strategy in the SMCP, but in order to be fully customized, you must specify your configuration in the Jaeger CR and link that to the SMCP. streaming - The streaming strategy is designed to augment the production strategy by providing a streaming capability that sits between the Collector and the Elasticsearch back-end storage. This provides the benefit of reducing the pressure on the back-end storage, under high load situations, and enables other trace post-processing capabilities to tap into the real-time span data directly from the streaming platform ( AMQ Streams / Kafka ). You cannot configure this deployment strategy in the SMCP; you must configure a Jaeger CR and link that to the SMCP. Note The streaming strategy requires an additional Red Hat subscription for AMQ Streams. 1.26.3.1. Default distributed tracing platform deployment If you do not specify Jaeger configuration options, the ServiceMeshControlPlane resource will use the allInOne Jaeger deployment strategy by default. When using the default allInOne deployment strategy, set spec.addons.jaeger.install.storage.type to Memory . You can accept the defaults or specify additional configuration options under install . 
Control plane default Jaeger parameters (Memory) apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.3 tracing: sampling: 10000 type: Jaeger addons: jaeger: name: jaeger install: storage: type: Memory 1.26.3.2. Production distributed tracing platform deployment (minimal) To use the default settings for the production deployment strategy, set spec.addons.jaeger.install.storage.type to Elasticsearch and specify additional configuration options under install . Note that the SMCP only supports configuring Elasticsearch resources and image name. Control plane default Jaeger parameters (Elasticsearch) apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.3 tracing: sampling: 10000 type: Jaeger addons: jaeger: name: jaeger #name of Jaeger CR install: storage: type: Elasticsearch ingress: enabled: true runtime: components: tracing.jaeger.elasticsearch: # only supports resources and image name container: resources: {} 1.26.3.3. Production distributed tracing platform deployment (fully customized) The SMCP supports only minimal Elasticsearch parameters. To fully customize your production environment and access all of the Elasticsearch configuration parameters, use the Jaeger custom resource (CR) to configure Jaeger. Create and configure your Jaeger instance and set spec.addons.jaeger.name to the name of the Jaeger instance, in this example: MyJaegerInstance . Control plane with linked Jaeger production CR apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.3 tracing: sampling: 1000 type: Jaeger addons: jaeger: name: MyJaegerInstance #name of Jaeger CR install: storage: type: Elasticsearch ingress: enabled: true 1.26.3.4. Streaming Jaeger deployment To use the streaming deployment strategy, you create and configure your Jaeger instance first, then set spec.addons.jaeger.name to the name of the Jaeger instance, in this example: MyJaegerInstance . Control plane with linked Jaeger streaming CR apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.3 tracing: sampling: 1000 type: Jaeger addons: jaeger: name: MyJaegerInstance #name of Jaeger CR 1.26.4. Specifying Jaeger configuration in a Jaeger custom resource You can fully customize your Jaeger deployment by configuring Jaeger in the Jaeger custom resource (CR) rather than in the ServiceMeshControlPlane (SMCP) resource. This configuration is sometimes referred to as an "external Jaeger" since the configuration is specified outside of the SMCP. Note You must deploy the SMCP and Jaeger CR in the same namespace. For example, istio-system . You can configure and deploy a standalone Jaeger instance and then specify the name of the Jaeger resource as the value for spec.addons.jaeger.name in the SMCP resource. If a Jaeger CR matching the value of name exists, the Service Mesh control plane will use the existing installation. This approach lets you fully customize your Jaeger configuration. 1.26.4.1. Deployment best practices Red Hat OpenShift distributed tracing instance names must be unique. If you want to have multiple Red Hat OpenShift distributed tracing platform instances and are using sidecar injected agents, then the Red Hat OpenShift distributed tracing platform instances should have unique names, and the injection annotation should explicitly specify the Red Hat OpenShift distributed tracing platform instance name the tracing data should be reported to. 
If you have a multitenant implementation and tenants are separated by namespaces, deploy a Red Hat OpenShift distributed tracing platform instance to each tenant namespace. Agent as a daemonset is not supported for multitenant installations or Red Hat OpenShift Dedicated. Agent as a sidecar is the only supported configuration for these use cases. If you are installing distributed tracing as part of Red Hat OpenShift Service Mesh, the distributed tracing resources must be installed in the same namespace as the ServiceMeshControlPlane resource. For information about configuring persistent storage, see Understanding persistent storage and the appropriate configuration topic for your chosen storage option. 1.26.4.2. Configuring distributed tracing security for service mesh The distributed tracing platform uses OAuth for default authentication. However Red Hat OpenShift Service Mesh uses a secret called htpasswd to facilitate communication between dependent services such as Grafana, Kiali, and the distributed tracing platform. When you configure your distributed tracing platform in the ServiceMeshControlPlane the Service Mesh automatically configures security settings to use htpasswd . If you are specifying your distributed tracing platform configuration in a Jaeger custom resource, you must manually configure the htpasswd settings and ensure the htpasswd secret is mounted into your Jaeger instance so that Kiali can communicate with it. 1.26.4.2.1. Configuring distributed tracing security for service mesh from the OpenShift console You can modify the Jaeger resource to configure distributed tracing platform security for use with Service Mesh in the OpenShift console. Prerequisites You have access to the cluster as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. The Red Hat OpenShift Service Mesh Operator must be installed. The ServiceMeshControlPlane deployed to the cluster. You have access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Navigate to Operators Installed Operators. Click the Project menu and select the project where your ServiceMeshControlPlane resource is deployed from the list, for example istio-system . Click the Red Hat OpenShift distributed tracing platform Operator . On the Operator Details page, click the Jaeger tab. Click the name of your Jaeger instance. On the Jaeger details page, click the YAML tab to modify your configuration. Edit the Jaeger custom resource file to add the htpasswd configuration as shown in the following example. spec.ingress.openshift.htpasswdFile spec.volumes spec.volumeMounts Example Jaeger resource showing htpasswd configuration apiVersion: jaegertracing.io/v1 kind: Jaeger spec: ingress: enabled: true openshift: htpasswdFile: /etc/proxy/htpasswd/auth sar: '{"namespace": "istio-system", "resource": "pods", "verb": "get"}' options: {} resources: {} security: oauth-proxy volumes: - name: secret-htpasswd secret: secretName: htpasswd - configMap: defaultMode: 420 items: - key: ca-bundle.crt path: tls-ca-bundle.pem name: trusted-ca-bundle optional: true name: trusted-ca-bundle volumeMounts: - mountPath: /etc/proxy/htpasswd name: secret-htpasswd - mountPath: /etc/pki/ca-trust/extracted/pem/ name: trusted-ca-bundle readOnly: true Click Save . 1.26.4.2.2. 
1.26.4.2.2. Configuring distributed tracing security for service mesh from the command line

You can modify the Jaeger resource to configure distributed tracing platform security for use with Service Mesh from the command line using the oc utility.

Prerequisites

You have access to the cluster as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role.
The Red Hat OpenShift Service Mesh Operator must be installed.
The ServiceMeshControlPlane must be deployed to the cluster.
You have access to the OpenShift CLI (oc) that matches your OpenShift Container Platform version.

Procedure

Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role.

$ oc login https://<HOSTNAME>:6443

Change to the project where you installed the control plane, for example istio-system , by entering the following command:

$ oc project istio-system

Run the following command to edit the Jaeger custom resource file, where jaeger.yaml is the name of your Jaeger custom resource.

$ oc edit -n tracing-system -f jaeger.yaml

Edit the Jaeger custom resource file to add the htpasswd configuration as shown in the following example, under the following parameters:
spec.ingress.openshift.htpasswdFile
spec.volumes
spec.volumeMounts

Example Jaeger resource showing htpasswd configuration

apiVersion: jaegertracing.io/v1
kind: Jaeger
spec:
  ingress:
    enabled: true
    openshift:
      htpasswdFile: /etc/proxy/htpasswd/auth
      sar: '{"namespace": "istio-system", "resource": "pods", "verb": "get"}'
    options: {}
    resources: {}
    security: oauth-proxy
  volumes:
    - name: secret-htpasswd
      secret:
        secretName: htpasswd
    - configMap:
        defaultMode: 420
        items:
          - key: ca-bundle.crt
            path: tls-ca-bundle.pem
        name: trusted-ca-bundle
        optional: true
      name: trusted-ca-bundle
  volumeMounts:
    - mountPath: /etc/proxy/htpasswd
      name: secret-htpasswd
    - mountPath: /etc/pki/ca-trust/extracted/pem/
      name: trusted-ca-bundle
      readOnly: true

Run the following command to apply your changes, where <jaeger.yaml> is the name of your Jaeger custom resource.

$ oc apply -n tracing-system -f <jaeger.yaml>

Run the following command to watch the progress of the pod deployment:

$ oc get pods -n tracing-system -w

1.26.4.3. Distributed tracing default configuration options

The Jaeger custom resource (CR) defines the architecture and settings to be used when creating the distributed tracing platform resources. You can modify these parameters to customize your distributed tracing platform implementation to your business needs.

Jaeger generic YAML example

apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: name
spec:
  strategy: <deployment_strategy>
  allInOne:
    options: {}
    resources: {}
  agent:
    options: {}
    resources: {}
  collector:
    options: {}
    resources: {}
  sampling:
    options: {}
  storage:
    type:
    options: {}
  query:
    options: {}
    resources: {}
  ingester:
    options: {}
    resources: {}
  options: {}

Table 1.39. Jaeger parameters Parameter Description Values Default value apiVersion: API version to use when creating the object. jaegertracing.io/v1 jaegertracing.io/v1 kind: Defines the kind of Kubernetes object to create. jaeger metadata: Data that helps uniquely identify the object, including a name string, UID , and optional namespace . OpenShift Container Platform automatically generates the UID and completes the namespace with the name of the project where the object is created. name: Name for the object. The name of your distributed tracing platform instance.
jaeger-all-in-one-inmemory spec: Specification for the object to be created. Contains all of the configuration parameters for your distributed tracing platform instance. When a common definition for all Jaeger components is required, it is defined under the spec node. When the definition relates to an individual component, it is placed under the spec/<component> node. N/A strategy: Jaeger deployment strategy allInOne , production , or streaming allInOne allInOne: Because the allInOne image deploys the Agent, Collector, Query, Ingester, and Jaeger UI in a single pod, configuration for this deployment must nest component configuration under the allInOne parameter. agent: Configuration options that define the Agent. collector: Configuration options that define the Jaeger Collector. sampling: Configuration options that define the sampling strategies for tracing. storage: Configuration options that define the storage. All storage-related options must be placed under storage , rather than under the allInOne or other component options. query: Configuration options that define the Query service. ingester: Configuration options that define the Ingester service. The following example YAML is the minimum required to create a Red Hat OpenShift distributed tracing platform deployment using the default settings. Example minimum required dist-tracing-all-in-one.yaml apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-all-in-one-inmemory 1.26.4.4. Jaeger Collector configuration options The Jaeger Collector is the component responsible for receiving the spans that were captured by the tracer and writing them to persistent Elasticsearch storage when using the production strategy, or to AMQ Streams when using the streaming strategy. The Collectors are stateless and thus many instances of Jaeger Collector can be run in parallel. Collectors require almost no configuration, except for the location of the Elasticsearch cluster. Table 1.40. Parameters used by the Operator to define the Jaeger Collector Parameter Description Values Specifies the number of Collector replicas to create. Integer, for example, 5 Table 1.41. Configuration parameters passed to the Collector Parameter Description Values Configuration options that define the Jaeger Collector. The number of workers pulling from the queue. Integer, for example, 50 The size of the Collector queue. Integer, for example, 2000 The topic parameter identifies the Kafka configuration used by the Collector to produce the messages, and the Ingester to consume the messages. Label for the producer. Identifies the Kafka configuration used by the Collector to produce the messages. If brokers are not specified, and you have AMQ Streams 1.4.0+ installed, the Red Hat OpenShift distributed tracing platform Operator will self-provision Kafka. Logging level for the Collector. Possible values: debug , info , warn , error , fatal , panic . 1.26.4.5. Distributed tracing sampling configuration options The Red Hat OpenShift distributed tracing platform Operator can be used to define sampling strategies that will be supplied to tracers that have been configured to use a remote sampler. While all traces are generated, only a few are sampled. Sampling a trace marks the trace for further processing and storage. Note This is not relevant if a trace was started by the Envoy proxy, as the sampling decision is made there. The Jaeger sampling decision is only relevant when the trace is started by an application using the client. 
When a service receives a request that contains no trace context, the client starts a new trace, assigns it a random trace ID, and makes a sampling decision based on the currently installed sampling strategy. The sampling decision propagates to all subsequent requests in the trace so that other services are not making the sampling decision again. distributed tracing platform libraries support the following samplers: Probabilistic - The sampler makes a random sampling decision with the probability of sampling equal to the value of the sampling.param property. For example, using sampling.param=0.1 samples approximately 1 in 10 traces. Rate Limiting - The sampler uses a leaky bucket rate limiter to ensure that traces are sampled with a certain constant rate. For example, using sampling.param=2.0 samples requests with the rate of 2 traces per second. Table 1.42. Jaeger sampling options Parameter Description Values Default value Configuration options that define the sampling strategies for tracing. If you do not provide configuration, the Collectors will return the default probabilistic sampling policy with 0.001 (0.1%) probability for all services. Sampling strategy to use. See descriptions above. Valid values are probabilistic , and ratelimiting . probabilistic Parameters for the selected sampling strategy. Decimal and integer values (0, .1, 1, 10) 1 This example defines a default sampling strategy that is probabilistic, with a 50% chance of the trace instances being sampled. Probabilistic sampling example apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: with-sampling spec: sampling: options: default_strategy: type: probabilistic param: 0.5 service_strategies: - service: alpha type: probabilistic param: 0.8 operation_strategies: - operation: op1 type: probabilistic param: 0.2 - operation: op2 type: probabilistic param: 0.4 - service: beta type: ratelimiting param: 5 If there are no user-supplied configurations, the distributed tracing platform uses the following settings: Default sampling spec: sampling: options: default_strategy: type: probabilistic param: 1 1.26.4.6. Distributed tracing storage configuration options You configure storage for the Collector, Ingester, and Query services under spec.storage . Multiple instances of each of these components can be provisioned as required for performance and resilience purposes. Table 1.43. General storage parameters used by the Red Hat OpenShift distributed tracing platform Operator to define distributed tracing storage Parameter Description Values Default value Type of storage to use for the deployment. memory or elasticsearch . Memory storage is only appropriate for development, testing, demonstrations, and proof of concept environments as the data does not persist if the pod is shut down. For production environments distributed tracing platform supports Elasticsearch for persistent storage. memory Name of the secret, for example tracing-secret . N/A Configuration options that define the storage. Table 1.44. Elasticsearch index cleaner parameters Parameter Description Values Default value When using Elasticsearch storage, by default a job is created to clean old traces from the index. This parameter enables or disables the index cleaner job. true / false true Number of days to wait before deleting an index. Integer value 7 Defines the schedule for how often to clean the Elasticsearch index. Cron expression "55 23 * * *" 1.26.4.6.1. 
Auto-provisioning an Elasticsearch instance When you deploy a Jaeger custom resource, the Red Hat OpenShift distributed tracing platform Operator uses the OpenShift Elasticsearch Operator to create an Elasticsearch cluster based on the configuration provided in the storage section of the custom resource file. The Red Hat OpenShift distributed tracing platform Operator will provision Elasticsearch if the following configurations are set: spec.storage:type is set to elasticsearch spec.storage.elasticsearch.doNotProvision set to false spec.storage.options.es.server-urls is not defined, that is, there is no connection to an Elasticsearch instance that was not provisioned by the Red Hat Elasticsearch Operator. When provisioning Elasticsearch, the Red Hat OpenShift distributed tracing platform Operator sets the Elasticsearch custom resource name to the value of spec.storage.elasticsearch.name from the Jaeger custom resource. If you do not specify a value for spec.storage.elasticsearch.name , the Operator uses elasticsearch . Restrictions You can have only one distributed tracing platform with self-provisioned Elasticsearch instance per namespace. The Elasticsearch cluster is meant to be dedicated for a single distributed tracing platform instance. There can be only one Elasticsearch per namespace. Note If you already have installed Elasticsearch as part of OpenShift Logging, the Red Hat OpenShift distributed tracing platform Operator can use the installed OpenShift Elasticsearch Operator to provision storage. The following configuration parameters are for a self-provisioned Elasticsearch instance, that is an instance created by the Red Hat OpenShift distributed tracing platform Operator using the OpenShift Elasticsearch Operator. You specify configuration options for self-provisioned Elasticsearch under spec:storage:elasticsearch in your configuration file. Table 1.45. Elasticsearch resource configuration parameters Parameter Description Values Default value Use to specify whether or not an Elasticsearch instance should be provisioned by the Red Hat OpenShift distributed tracing platform Operator. true / false true Name of the Elasticsearch instance. The Red Hat OpenShift distributed tracing platform Operator uses the Elasticsearch instance specified in this parameter to connect to Elasticsearch. string elasticsearch Number of Elasticsearch nodes. For high availability use at least 3 nodes. Do not use 2 nodes as "split brain" problem can happen. Integer value. For example, Proof of concept = 1, Minimum deployment =3 3 Number of central processing units for requests, based on your environment's configuration. Specified in cores or millicores, for example, 200m, 0.5, 1. For example, Proof of concept = 500m, Minimum deployment =1 1 Available memory for requests, based on your environment's configuration. Specified in bytes, for example, 200Ki, 50Mi, 5Gi. For example, Proof of concept = 1Gi, Minimum deployment = 16Gi* 16Gi Limit on number of central processing units, based on your environment's configuration. Specified in cores or millicores, for example, 200m, 0.5, 1. For example, Proof of concept = 500m, Minimum deployment =1 Available memory limit based on your environment's configuration. Specified in bytes, for example, 200Ki, 50Mi, 5Gi. For example, Proof of concept = 1Gi, Minimum deployment = 16Gi* Data replication policy defines how Elasticsearch shards are replicated across data nodes in the cluster. 
If not specified, the Red Hat OpenShift distributed tracing platform Operator automatically determines the most appropriate replication based on number of nodes. ZeroRedundancy (no replica shards), SingleRedundancy (one replica shard), MultipleRedundancy (each index is spread over half of the Data nodes), FullRedundancy (each index is fully replicated on every Data node in the cluster). Use to specify whether or not distributed tracing platform should use the certificate management feature of the Red Hat Elasticsearch Operator. This feature was added to logging subsystem for Red Hat OpenShift 5.2 in OpenShift Container Platform 4.7 and is the preferred setting for new Jaeger deployments. true / false true

*Each Elasticsearch node can operate with a lower memory setting though this is NOT recommended for production deployments. For production use, you should have no less than 16Gi allocated to each pod by default, but preferably allocate as much as you can, up to 64Gi per pod.

Production storage example

apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: simple-prod
spec:
  strategy: production
  storage:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3
      resources:
        requests:
          cpu: 1
          memory: 16Gi
        limits:
          memory: 16Gi

Storage example with persistent storage:

apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: simple-prod
spec:
  strategy: production
  storage:
    type: elasticsearch
    elasticsearch:
      nodeCount: 1
      storage: 1
        storageClassName: gp2
        size: 5Gi
      resources:
        requests:
          cpu: 200m
          memory: 4Gi
        limits:
          memory: 4Gi
      redundancyPolicy: ZeroRedundancy

1 Persistent storage configuration. In this case AWS gp2 with 5Gi size. When no value is specified, distributed tracing platform uses emptyDir . The OpenShift Elasticsearch Operator provisions PersistentVolumeClaim and PersistentVolume which are not removed with the distributed tracing platform instance. You can mount the same volumes if you create a distributed tracing platform instance with the same name and namespace.

1.26.4.6.2. Connecting to an existing Elasticsearch instance

You can use an existing Elasticsearch cluster for storage with distributed tracing. An existing Elasticsearch cluster, also known as an external Elasticsearch instance, is an instance that was not installed by the Red Hat OpenShift distributed tracing platform Operator or by the Red Hat Elasticsearch Operator.

When you deploy a Jaeger custom resource, the Red Hat OpenShift distributed tracing platform Operator will not provision Elasticsearch if the following configurations are set:

spec.storage.elasticsearch.doNotProvision is set to true
spec.storage.options.es.server-urls has a value
spec.storage.elasticsearch.name has a value, or if the Elasticsearch instance name is elasticsearch .

The Red Hat OpenShift distributed tracing platform Operator uses the Elasticsearch instance specified in spec.storage.elasticsearch.name to connect to Elasticsearch.

Restrictions

You cannot share or reuse an OpenShift Container Platform logging Elasticsearch instance with distributed tracing platform. The Elasticsearch cluster is meant to be dedicated for a single distributed tracing platform instance.

Note Red Hat does not provide support for your external Elasticsearch instance. You can review the tested integrations matrix on the Customer Portal .

The following configuration parameters are for an already existing Elasticsearch instance, also known as an external Elasticsearch instance.
In this case, you specify configuration options for Elasticsearch under spec:storage:options:es in your custom resource file. Table 1.46. General ES configuration parameters Parameter Description Values Default value URL of the Elasticsearch instance. The fully-qualified domain name of the Elasticsearch server. http://elasticsearch.<namespace>.svc:9200 The maximum document count to return from an Elasticsearch query. This will also apply to aggregations. If you set both es.max-doc-count and es.max-num-spans , Elasticsearch will use the smaller value of the two. 10000 [ Deprecated - Will be removed in a future release, use es.max-doc-count instead.] The maximum number of spans to fetch at a time, per query, in Elasticsearch. If you set both es.max-num-spans and es.max-doc-count , Elasticsearch will use the smaller value of the two. 10000 The maximum lookback for spans in Elasticsearch. 72h0m0s The sniffer configuration for Elasticsearch. The client uses the sniffing process to find all nodes automatically. Disabled by default. true / false false Option to enable TLS when sniffing an Elasticsearch Cluster. The client uses the sniffing process to find all nodes automatically. Disabled by default true / false false Timeout used for queries. When set to zero there is no timeout. 0s The username required by Elasticsearch. The basic authentication also loads CA if it is specified. See also es.password . The password required by Elasticsearch. See also, es.username . The major Elasticsearch version. If not specified, the value will be auto-detected from Elasticsearch. 0 Table 1.47. ES data replication parameters Parameter Description Values Default value The number of replicas per index in Elasticsearch. 1 The number of shards per index in Elasticsearch. 5 Table 1.48. ES index configuration parameters Parameter Description Values Default value Automatically create index templates at application startup when set to true . When templates are installed manually, set to false . true / false true Optional prefix for distributed tracing platform indices. For example, setting this to "production" creates indices named "production-tracing-*". Table 1.49. ES bulk processor configuration parameters Parameter Description Values Default value The number of requests that can be added to the queue before the bulk processor decides to commit updates to disk. 1000 A time.Duration after which bulk requests are committed, regardless of other thresholds. To disable the bulk processor flush interval, set this to zero. 200ms The number of bytes that the bulk requests can take up before the bulk processor decides to commit updates to disk. 5000000 The number of workers that are able to receive and commit bulk requests to Elasticsearch. 1 Table 1.50. ES TLS configuration parameters Parameter Description Values Default value Path to a TLS Certification Authority (CA) file used to verify the remote servers. Will use the system truststore by default. Path to a TLS Certificate file, used to identify this process to the remote servers. Enable transport layer security (TLS) when talking to the remote servers. Disabled by default. true / false false Path to a TLS Private Key file, used to identify this process to the remote servers. Override the expected TLS server name in the certificate of the remote servers. Path to a file containing the bearer token. This flag also loads the Certification Authority (CA) file if it is specified. Table 1.51. 
ES archive configuration parameters Parameter Description Values Default value The number of requests that can be added to the queue before the bulk processor decides to commit updates to disk. 0 A time.Duration after which bulk requests are committed, regardless of other thresholds. To disable the bulk processor flush interval, set this to zero. 0s The number of bytes that the bulk requests can take up before the bulk processor decides to commit updates to disk. 0 The number of workers that are able to receive and commit bulk requests to Elasticsearch. 0 Automatically create index templates at application startup when set to true . When templates are installed manually, set to false . true / false false Enable extra storage. true / false false Optional prefix for distributed tracing platform indices. For example, setting this to "production" creates indices named "production-tracing-*". The maximum document count to return from an Elasticsearch query. This will also apply to aggregations. 0 [ Deprecated - Will be removed in a future release, use es-archive.max-doc-count instead.] The maximum number of spans to fetch at a time, per query, in Elasticsearch. 0 The maximum lookback for spans in Elasticsearch. 0s The number of replicas per index in Elasticsearch. 0 The number of shards per index in Elasticsearch. 0 The password required by Elasticsearch. See also, es.username . The comma-separated list of Elasticsearch servers. Must be specified as fully qualified URLs, for example, http://localhost:9200 . The sniffer configuration for Elasticsearch. The client uses the sniffing process to find all nodes automatically. Disabled by default. true / false false Option to enable TLS when sniffing an Elasticsearch Cluster. The client uses the sniffing process to find all nodes automatically. Disabled by default. true / false false Timeout used for queries. When set to zero there is no timeout. 0s Path to a TLS Certification Authority (CA) file used to verify the remote servers. Will use the system truststore by default. Path to a TLS Certificate file, used to identify this process to the remote servers. Enable transport layer security (TLS) when talking to the remote servers. Disabled by default. true / false false Path to a TLS Private Key file, used to identify this process to the remote servers. Override the expected TLS server name in the certificate of the remote servers. Path to a file containing the bearer token. This flag also loads the Certification Authority (CA) file if it is specified. The username required by Elasticsearch. The basic authentication also loads CA if it is specified. See also es-archive.password . The major Elasticsearch version. If not specified, the value will be auto-detected from Elasticsearch. 0 Storage example with volume mounts apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 index-prefix: my-prefix tls: ca: /es/certificates/ca.crt secretName: tracing-secret volumeMounts: - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public The following example shows a Jaeger CR using an external Elasticsearch cluster with TLS CA certificate mounted from a volume and user/password stored in a secret. 
External Elasticsearch example: apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 1 index-prefix: my-prefix tls: 2 ca: /es/certificates/ca.crt secretName: tracing-secret 3 volumeMounts: 4 - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public 1 URL to Elasticsearch service running in default namespace. 2 TLS configuration. In this case only CA certificate, but it can also contain es.tls.key and es.tls.cert when using mutual TLS. 3 Secret which defines environment variables ES_PASSWORD and ES_USERNAME. Created by kubectl create secret generic tracing-secret --from-literal=ES_PASSWORD=changeme --from-literal=ES_USERNAME=elastic 4 Volume mounts and volumes which are mounted into all storage components. 1.26.4.7. Managing certificates with Elasticsearch You can create and manage certificates using the Red Hat Elasticsearch Operator. Managing certificates using the Red Hat Elasticsearch Operator also lets you use a single Elasticsearch cluster with multiple Jaeger Collectors. Important Managing certificates with Elasticsearch is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Starting with version 2.4, the Red Hat OpenShift distributed tracing platform Operator delegates certificate creation to the Red Hat Elasticsearch Operator by using the following annotations in the Elasticsearch custom resource: logging.openshift.io/elasticsearch-cert-management: "true" logging.openshift.io/elasticsearch-cert.jaeger-<shared-es-node-name>: "user.jaeger" logging.openshift.io/elasticsearch-cert.curator-<shared-es-node-name>: "system.logging.curator" Where the <shared-es-node-name> is the name of the Elasticsearch node. For example, if you create an Elasticsearch node named custom-es , your custom resource might look like the following example. Example Elasticsearch CR showing annotations apiVersion: logging.openshift.io/v1 kind: Elasticsearch metadata: annotations: logging.openshift.io/elasticsearch-cert-management: "true" logging.openshift.io/elasticsearch-cert.jaeger-custom-es: "user.jaeger" logging.openshift.io/elasticsearch-cert.curator-custom-es: "system.logging.curator" name: custom-es spec: managementState: Managed nodeSpec: resources: limits: memory: 16Gi requests: cpu: 1 memory: 16Gi nodes: - nodeCount: 3 proxyResources: {} resources: {} roles: - master - client - data storage: {} redundancyPolicy: ZeroRedundancy Prerequisites OpenShift Container Platform 4.7 logging subsystem for Red Hat OpenShift 5.2 The Elasticsearch node and the Jaeger instances must be deployed in the same namespace. For example, tracing-system . You enable certificate management by setting spec.storage.elasticsearch.useCertManagement to true in the Jaeger custom resource. 
Example showing useCertManagement apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-prod spec: strategy: production storage: type: elasticsearch elasticsearch: name: custom-es doNotProvision: true useCertManagement: true The Red Hat OpenShift distributed tracing platform Operator sets the Elasticsearch custom resource name to the value of spec.storage.elasticsearch.name from the Jaeger custom resource when provisioning Elasticsearch. The certificates are provisioned by the Red Hat Elasticsearch Operator and the Red Hat OpenShift distributed tracing platform Operator injects the certificates. For more information about configuring Elasticsearch with OpenShift Container Platform, see Configuring the log store or Configuring and deploying distributed tracing . 1.26.4.8. Query configuration options Query is a service that retrieves traces from storage and hosts the user interface to display them. Table 1.52. Parameters used by the Red Hat OpenShift distributed tracing platform Operator to define Query Parameter Description Values Default value Specifies the number of Query replicas to create. Integer, for example, 2 Table 1.53. Configuration parameters passed to Query Parameter Description Values Default value Configuration options that define the Query service. Logging level for Query. Possible values: debug , info , warn , error , fatal , panic . The base path for all jaeger-query HTTP routes can be set to a non-root value, for example, /jaeger would cause all UI URLs to start with /jaeger . This can be useful when running jaeger-query behind a reverse proxy. /<path> Sample Query configuration apiVersion: jaegertracing.io/v1 kind: "Jaeger" metadata: name: "my-jaeger" spec: strategy: allInOne allInOne: options: log-level: debug query: base-path: /jaeger 1.26.4.9. Ingester configuration options Ingester is a service that reads from a Kafka topic and writes to the Elasticsearch storage backend. If you are using the allInOne or production deployment strategies, you do not need to configure the Ingester service. Table 1.54. Jaeger parameters passed to the Ingester Parameter Description Values Configuration options that define the Ingester service. Specifies the interval, in seconds or minutes, that the Ingester must wait for a message before terminating. The deadlock interval is disabled by default (set to 0 ), to avoid terminating the Ingester when no messages arrive during system initialization. Minutes and seconds, for example, 1m0s . Default value is 0 . The topic parameter identifies the Kafka configuration used by the collector to produce the messages, and the Ingester to consume the messages. Label for the consumer. For example, jaeger-spans . Identifies the Kafka configuration used by the Ingester to consume the messages. Label for the broker, for example, my-cluster-kafka-brokers.kafka:9092 . Logging level for the Ingester. Possible values: debug , info , warn , error , fatal , dpanic , panic . Streaming Collector and Ingester example apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-streaming spec: strategy: streaming collector: options: kafka: producer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: options: kafka: consumer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: deadlockInterval: 5 storage: type: elasticsearch options: es: server-urls: http://elasticsearch:9200 1.27. 
Uninstalling Service Mesh

To uninstall Red Hat OpenShift Service Mesh from an existing OpenShift Container Platform instance and remove its resources, you must delete the control plane, delete the Operators, and run commands to manually remove some resources.

1.27.1. Removing the Red Hat OpenShift Service Mesh control plane

To uninstall Service Mesh from an existing OpenShift Container Platform instance, first you delete the Service Mesh control plane and the Operators. Then, you run commands to remove residual resources.

1.27.1.1. Removing the Service Mesh control plane using the web console

You can remove the Red Hat OpenShift Service Mesh control plane by using the web console.

Procedure

Log in to the OpenShift Container Platform web console.
Click the Project menu and select the project where you installed the Service Mesh control plane, for example istio-system .
Navigate to Operators → Installed Operators .
Click Service Mesh Control Plane under Provided APIs .
Click the ServiceMeshControlPlane menu .
Click Delete Service Mesh Control Plane .
Click Delete on the confirmation dialog window to remove the ServiceMeshControlPlane .

1.27.1.2. Removing the Service Mesh control plane using the CLI

You can remove the Red Hat OpenShift Service Mesh control plane by using the CLI. In this example, istio-system is the name of the control plane project.

Procedure

Log in to the OpenShift Container Platform CLI.

Run the following command to delete the ServiceMeshMemberRoll resource.

$ oc delete smmr -n istio-system default

Run this command to retrieve the name of the installed ServiceMeshControlPlane :

$ oc get smcp -n istio-system

Replace <name_of_custom_resource> with the output from the command, and run this command to remove the custom resource:

$ oc delete smcp -n istio-system <name_of_custom_resource>

1.27.2. Removing the installed Operators

You must remove the Operators to successfully remove Red Hat OpenShift Service Mesh. After you remove the Red Hat OpenShift Service Mesh Operator, you must remove the Kiali Operator, the Red Hat OpenShift distributed tracing platform Operator, and the OpenShift Elasticsearch Operator.

1.27.2.1. Removing the Operators

Follow this procedure to remove the Operators that make up Red Hat OpenShift Service Mesh. Repeat the steps for each of the following Operators.

Red Hat OpenShift Service Mesh
Kiali
Red Hat OpenShift distributed tracing platform
OpenShift Elasticsearch

Procedure

Log in to the OpenShift Container Platform web console.
From the Operators → Installed Operators page, scroll or type a keyword into the Filter by name to find each Operator. Then, click the Operator name.
On the Operator Details page, select Uninstall Operator from the Actions menu. Follow the prompts to uninstall each Operator.

1.27.3. Clean up Operator resources

You can manually remove resources left behind after removing the Red Hat OpenShift Service Mesh Operator using the OpenShift Container Platform web console.

Prerequisites

An account with cluster administration access. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role.
Access to the OpenShift CLI ( oc ).

Procedure

Log in to the OpenShift Container Platform CLI as a cluster administrator.
Run the following commands to clean up resources after uninstalling the Operators. If you intend to keep using distributed tracing platform as a stand-alone service without service mesh, do not delete the Jaeger resources.
Note The OpenShift Elasticsearch Operator is installed in openshift-operators-redhat by default. The other Operators are installed in the openshift-operators namespace by default. If you installed the Operators in another namespace, replace openshift-operators with the name of the project where the Red Hat OpenShift Service Mesh Operator was installed.

$ oc delete validatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io
$ oc delete mutatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io
$ oc delete svc maistra-admission-controller -n openshift-operators
$ oc -n openshift-operators delete ds -lmaistra-version
$ oc delete clusterrole/istio-admin clusterrole/istio-cni clusterrolebinding/istio-cni
$ oc delete clusterrole istio-view istio-edit
$ oc delete clusterrole jaegers.jaegertracing.io-v1-admin jaegers.jaegertracing.io-v1-crdview jaegers.jaegertracing.io-v1-edit jaegers.jaegertracing.io-v1-view
$ oc get crds -o name | grep '.*\.istio\.io' | xargs -r -n 1 oc delete
$ oc get crds -o name | grep '.*\.maistra\.io' | xargs -r -n 1 oc delete
$ oc get crds -o name | grep '.*\.kiali\.io' | xargs -r -n 1 oc delete
$ oc delete crds jaegers.jaegertracing.io
$ oc delete cm -n openshift-operators maistra-operator-cabundle
$ oc delete cm -n openshift-operators istio-cni-config istio-cni-config-v2-3
$ oc delete sa -n openshift-operators istio-cni
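Optionally, you can verify that the webhook configurations and the Service Mesh related custom resource definitions were removed. These commands are illustrative and should return no output after a successful cleanup; adjust the namespace if the Operators were installed in a project other than openshift-operators.

$ oc get validatingwebhookconfigurations,mutatingwebhookconfigurations | grep maistra
$ oc get crds -o name | grep -E 'istio\.io|maistra\.io|kiali\.io|jaegertracing\.io'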
green-mesh.local/ns/green-mesh-system/sa/egress-red-mesh-service-account certificateChain: kind: ConfigMap name: green-mesh-ca-root-cert", "metadata: name:", "metadata: namespace:", "spec: remote: addresses:", "spec: remote: discoveryPort:", "spec: remote: servicePort:", "spec: gateways: ingress: name:", "spec: gateways: egress: name:", "spec: security: trustDomain:", "spec: security: clientID:", "spec: security: certificateChain: kind: ConfigMap name:", "oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443", "oc project red-mesh-system", "kind: ServiceMeshPeer apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: remote: addresses: - ingress-red-mesh.green-mesh-system.apps.domain.com gateways: ingress: name: ingress-green-mesh egress: name: egress-green-mesh security: trustDomain: green-mesh.local clientID: green-mesh.local/ns/green-mesh-system/sa/egress-red-mesh-service-account certificateChain: kind: ConfigMap name: green-mesh-ca-root-cert", "oc create -n red-mesh-system -f servicemeshpeer.yaml", "oc -n red-mesh-system get servicemeshpeer green-mesh -o yaml", "status: discoveryStatus: active: - pod: istiod-red-mesh-b65457658-9wq5j remotes: - connected: true lastConnected: \"2021-10-05T13:02:25Z\" lastFullSync: \"2021-10-05T13:02:25Z\" source: 10.128.2.149 watch: connected: true lastConnected: \"2021-10-05T13:02:55Z\" lastDisconnectStatus: 503 Service Unavailable lastFullSync: \"2021-10-05T13:05:43Z\"", "kind: ExportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: exportRules: # export ratings.mesh-x-bookinfo as ratings.bookinfo - type: NameSelector nameSelector: namespace: red-mesh-bookinfo name: red-ratings alias: namespace: bookinfo name: ratings # export any service in red-mesh-bookinfo namespace with label export-service=true - type: LabelSelector labelSelector: namespace: red-mesh-bookinfo selector: matchLabels: export-service: \"true\" aliases: # export all matching services as if they were in the bookinfo namespace - namespace: \"*\" name: \"*\" alias: namespace: bookinfo", "metadata: name:", "metadata: namespace:", "spec: exportRules: - type:", "spec: exportRules: - type: NameSelector nameSelector: namespace: name:", "spec: exportRules: - type: NameSelector nameSelector: alias: namespace: name:", "spec: exportRules: - type: LabelSelector labelSelector: namespace: <exportingMesh> selector: matchLabels: <labelKey>: <labelValue>", "spec: exportRules: - type: LabelSelector labelSelector: namespace: <exportingMesh> selector: matchLabels: <labelKey>: <labelValue> aliases: - namespace: name: alias: namespace: name:", "kind: ExportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: blue-mesh namespace: red-mesh-system spec: exportRules: - type: NameSelector nameSelector: namespace: \"*\" name: ratings", "kind: ExportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: exportRules: - type: NameSelector nameSelector: namespace: west-data-center name: \"*\"", "oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443", "oc project red-mesh-system", "apiVersion: federation.maistra.io/v1 kind: ExportedServiceSet metadata: name: green-mesh namespace: red-mesh-system spec: exportRules: - type: NameSelector nameSelector: namespace: red-mesh-bookinfo name: ratings alias: namespace: bookinfo name: red-ratings - type: NameSelector nameSelector: namespace: red-mesh-bookinfo name: reviews", "oc create 
-n <ControlPlaneNamespace> -f <ExportedServiceSet.yaml>", "oc create -n red-mesh-system -f export-to-green-mesh.yaml", "oc get exportedserviceset <PeerMeshExportedTo> -o yaml", "oc get exportedserviceset green-mesh -o yaml", "oc get exportedserviceset <PeerMeshExportedTo> -o yaml", "oc -n red-mesh-system get exportedserviceset green-mesh -o yaml", "status: exportedServices: - exportedName: red-ratings.bookinfo.svc.green-mesh-exports.local localService: hostname: ratings.red-mesh-bookinfo.svc.cluster.local name: ratings namespace: red-mesh-bookinfo - exportedName: reviews.red-mesh-bookinfo.svc.green-mesh-exports.local localService: hostname: reviews.red-mesh-bookinfo.svc.cluster.local name: reviews namespace: red-mesh-bookinfo", "kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh #name of mesh that exported the service namespace: green-mesh-system #mesh namespace that service is being imported into spec: importRules: # first matching rule is used # import ratings.bookinfo as ratings.bookinfo - type: NameSelector importAsLocal: false nameSelector: namespace: bookinfo name: ratings alias: # service will be imported as ratings.bookinfo.svc.red-mesh-imports.local namespace: bookinfo name: ratings", "metadata: name:", "metadata: namespace:", "spec: importRules: - type:", "spec: importRules: - type: NameSelector nameSelector: namespace: name:", "spec: importRules: - type: NameSelector importAsLocal:", "spec: importRules: - type: NameSelector nameSelector: namespace: name: alias: namespace: name:", "kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh namespace: blue-mesh-system spec: importRules: - type: NameSelector importAsLocal: false nameSelector: namespace: bookinfo name: ratings", "kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh namespace: green-mesh-system spec: importRules: - type: NameSelector importAsLocal: false nameSelector: namespace: west-data-center name: \"*\"", "oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443", "oc project green-mesh-system", "kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh namespace: green-mesh-system spec: importRules: - type: NameSelector importAsLocal: false nameSelector: namespace: bookinfo name: red-ratings alias: namespace: bookinfo name: ratings", "oc create -n <ControlPlaneNamespace> -f <ImportedServiceSet.yaml>", "oc create -n green-mesh-system -f import-from-red-mesh.yaml", "oc get importedserviceset <PeerMeshImportedInto> -o yaml", "oc get importedserviceset green-mesh -o yaml", "oc get importedserviceset <PeerMeshImportedInto> -o yaml", "oc -n green-mesh-system get importedserviceset/red-mesh -o yaml", "status: importedServices: - exportedName: red-ratings.bookinfo.svc.green-mesh-exports.local localService: hostname: ratings.bookinfo.svc.red-mesh-imports.local name: ratings namespace: bookinfo - exportedName: reviews.red-mesh-bookinfo.svc.green-mesh-exports.local localService: hostname: \"\" name: \"\" namespace: \"\"", "kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh #name of mesh that exported the service namespace: green-mesh-system #mesh namespace that service is being imported into spec: importRules: # first matching rule is used # import ratings.bookinfo as ratings.bookinfo - type: NameSelector importAsLocal: true nameSelector: namespace: bookinfo name: ratings alias: # service will be imported as ratings.bookinfo.svc.red-mesh-imports.local 
namespace: bookinfo name: ratings #Locality within which imported services should be associated. locality: region: us-west", "oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443", "oc project <smcp-system>", "oc project green-mesh-system", "oc edit -n <smcp-system> -f <ImportedServiceSet.yaml>", "oc edit -n green-mesh-system -f import-from-red-mesh.yaml", "oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443", "oc project <smcp-system>", "oc project green-mesh-system", "apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: default-failover namespace: bookinfo spec: host: \"ratings.bookinfo.svc.cluster.local\" trafficPolicy: loadBalancer: localityLbSetting: enabled: true failover: - from: us-east to: us-west outlierDetection: consecutive5xxErrors: 3 interval: 10s baseEjectionTime: 1m", "oc create -n <application namespace> -f <DestinationRule.yaml>", "oc create -n bookinfo -f green-mesh-us-west-DestinationRule.yaml", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: openid-connect namespace: istio-ingress spec: selector: matchLabels: istio: ingressgateway url: file:///opt/filters/openid.wasm sha256: 1ef0c9a92b0420cf25f7fe5d481b231464bc88f486ca3b9c83ed5cc21d2f6210 phase: AUTHN pluginConfig: openid_server: authn openid_realm: ingress", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: openid-connect namespace: istio-system spec: selector: matchLabels: istio: ingressgateway url: oci://private-registry:5000/openid-connect/openid:latest imagePullPolicy: IfNotPresent imagePullSecret: private-registry-pull-secret phase: AUTHN pluginConfig: openid_server: authn openid_realm: ingress", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: openid-connect namespace: istio-system spec: selector: matchLabels: istio: ingressgateway url: oci://private-registry:5000/openid-connect/openid:latest imagePullPolicy: IfNotPresent imagePullSecret: private-registry-pull-secret phase: AUTHN pluginConfig: openid_server: authn openid_realm: ingress", "oc apply -f plugin.yaml", "schemaVersion: 1 name: <your-extension> description: <description> version: 1.0.0 phase: PreAuthZ priority: 100 module: extension.wasm", "apiVersion: maistra.io/v1 kind: ServiceMeshExtension metadata: name: header-append namespace: istio-system spec: workloadSelector: labels: app: httpbin config: first-header: some-value another-header: another-value image: quay.io/maistra-dev/header-append-filter:2.1 phase: PostAuthZ priority: 100", "oc apply -f <extension>.yaml", "apiVersion: maistra.io/v1 kind: ServiceMeshExtension metadata: name: header-append namespace: istio-system spec: workloadSelector: labels: app: httpbin config: first-header: some-value another-header: another-value image: quay.io/maistra-dev/header-append-filter:2.2 phase: PostAuthZ priority: 100", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: header-append namespace: istio-system spec: selector: matchLabels: app: httpbin url: oci://quay.io/maistra-dev/header-append-filter:2.2 phase: STATS pluginConfig: first-header: some-value another-header: another-value", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> namespace: <bookinfo> 1 spec: selector: 2 labels: app: <product_page> pluginConfig: <yaml_configuration> url: oci://registry.redhat.io/3scale-amp2/3scale-auth-wasm-rhel8:0.0.3 phase: AUTHZ priority: 100", "oc apply -f threescale-wasm-auth-bookinfo.yaml", "apiVersion: 
networking.istio.io/v1beta1 kind: ServiceEntry metadata: name: service-entry-threescale-saas-backend spec: hosts: - su1.3scale.net ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS", "apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: destination-rule-threescale-saas-backend spec: host: su1.3scale.net trafficPolicy: tls: mode: SIMPLE sni: su1.3scale.net", "oc apply -f service-entry-threescale-saas-backend.yml", "oc apply -f destination-rule-threescale-saas-backend.yml", "apiVersion: networking.istio.io/v1beta1 kind: ServiceEntry metadata: name: service-entry-threescale-saas-system spec: hosts: - multitenant.3scale.net ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS", "apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: destination-rule-threescale-saas-system spec: host: multitenant.3scale.net trafficPolicy: tls: mode: SIMPLE sni: multitenant.3scale.net", "oc apply -f service-entry-threescale-saas-system.yml", "oc apply -f <destination-rule-threescale-saas-system.yml>", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> namespace: <bookinfo> spec: pluginConfig: api: v1", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: system: name: <saas_porta> upstream: <object> token: <my_account_token> ttl: 300", "apiVersion: maistra.io/v1 upstream: name: outbound|443||multitenant.3scale.net url: \"https://myaccount-admin.3scale.net/\" timeout: 5000", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: backend: name: backend upstream: <object>", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: services: - id: \"2555417834789\" token: service_token authorities: - \"*.app\" - 0.0.0.0 - \"0.0.0.0:8443\" credentials: <object> mapping_rules: <object>", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: services: - credentials: user_key: <array_of_lookup_queries> app_id: <array_of_lookup_queries> app_key: <array_of_lookup_queries>", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: services: - credentials: user_key: - <source_type>: <object> - <source_type>: <object> app_id: - <source_type>: <object> app_key: - <source_type>: <object>", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: mapping_rules: - method: GET pattern: / usages: - name: hits delta: 1 - method: GET pattern: /products/ usages: - name: products delta: 1 - method: ANY pattern: /products/{id}/sold usages: - name: sales delta: 1 - name: products delta: 1", "credentials: user_key: - query_string: keys: - user_key - header: keys: - user_key", "credentials: app_id: - header: keys: - app_id - query_string: keys: - app_id app_key: - header: keys: - app_key - query_string: keys: - app_key", "aladdin:opensesame: Authorization: Basic YWxhZGRpbjpvcGVuc2VzYW1l", "credentials: app_id: - header: keys: - authorization ops: - split: separator: \" \" max: 2 - length: min: 2 - drop: head: 1 - base64_urlsafe - split: max: 2 app_key: - header: keys: - app_key", "credentials: app_id: - header: keys: - authorization ops: - split: separator: 
\" \" max: 2 - length: min: 2 - reverse - glob: - Basic - drop: tail: 1 - base64_urlsafe - split: max: 2 - test: if: length: min: 2 then: - strlen: max: 63 - or: - strlen: min: 1 - drop: tail: 1 - assert: - and: - reverse - or: - strlen: min: 8 - glob: - aladdin - admin", "apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: bookinfo spec: selector: matchLabels: app: productpage jwtRules: - issuer: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak jwksUri: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs", "credentials: app_id: - filter: path: - envoy.filters.http.jwt_authn - \"0\" keys: - azp - aud ops: - take: head: 1", "credentials: app_id: - header: keys: - x-jwt-payload ops: - base64_urlsafe - json: - keys: - azp - aud - take: head: 1", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: url: oci://registry.redhat.io/3scale-amp2/3scale-auth-wasm-rhel8:0.0.3 imagePullSecret: <optional_pull_secret_resource> phase: AUTHZ priority: 100 selector: labels: app: <product_page> pluginConfig: api: v1 system: name: <system_name> upstream: name: outbound|443||multitenant.3scale.net url: https://istiodevel-admin.3scale.net/ timeout: 5000 token: <token> backend: name: <backend_name> upstream: name: outbound|443||su1.3scale.net url: https://su1.3scale.net/ timeout: 5000 extensions: - no_body services: - id: '2555417834780' authorities: - \"*\" credentials: user_key: - query_string: keys: - <user_key> - header: keys: - <user_key> app_id: - query_string: keys: - <app_id> - header: keys: - <app_id> app_key: - query_string: keys: - <app_key> - header: keys: - <app_key>", "apiVersion: \"config.istio.io/v1alpha2\" kind: handler metadata: name: threescale spec: adapter: threescale params: system_url: \"https://<organization>-admin.3scale.net/\" access_token: \"<ACCESS_TOKEN>\" connection: address: \"threescale-istio-adapter:3333\"", "apiVersion: \"config.istio.io/v1alpha2\" kind: rule metadata: name: threescale spec: match: destination.labels[\"service-mesh.3scale.net\"] == \"true\" actions: - handler: threescale.handler instances: - threescale-authorization.instance", "3scale-config-gen --name=admin-credentials --url=\"https://<organization>-admin.3scale.net:443\" --token=\"[redacted]\"", "3scale-config-gen --url=\"https://<organization>-admin.3scale.net\" --name=\"my-unique-id\" --service=\"123456789\" --token=\"[redacted]\"", "export NS=\"istio-system\" URL=\"https://replaceme-admin.3scale.net:443\" NAME=\"name\" TOKEN=\"token\" exec -n USD{NS} USD(oc get po -n USD{NS} -o jsonpath='{.items[?(@.metadata.labels.app==\"3scale-istio-adapter\")].metadata.name}') -it -- ./3scale-config-gen --url USD{URL} --name USD{NAME} --token USD{TOKEN} -n USD{NS}", "export CREDENTIALS_NAME=\"replace-me\" export SERVICE_ID=\"replace-me\" export DEPLOYMENT=\"replace-me\" patch=\"USD(oc get deployment \"USD{DEPLOYMENT}\" patch=\"USD(oc get deployment \"USD{DEPLOYMENT}\" --template='{\"spec\":{\"template\":{\"metadata\":{\"labels\":{ {{ range USDk,USDv := .spec.template.metadata.labels }}\"{{ USDk }}\":\"{{ USDv }}\",{{ end }}\"service-mesh.3scale.net/service-id\":\"'\"USD{SERVICE_ID}\"'\",\"service-mesh.3scale.net/credentials\":\"'\"USD{CREDENTIALS_NAME}\"'\"}}}}}' )\" patch deployment \"USD{DEPLOYMENT}\" --patch ''\"USD{patch}\"''", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization 
namespace: istio-system spec: template: authorization params: subject: user: request.query_params[\"user_key\"] | request.headers[\"user-key\"] | \"\" action: path: request.url_path method: request.method | \"get\"", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: app_id: request.query_params[\"app_id\"] | request.headers[\"app-id\"] | \"\" app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" action: path: request.url_path method: request.method | \"get\"", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization spec: template: threescale-authorization params: subject: properties: app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" client_id: request.auth.claims[\"azp\"] | \"\" action: path: request.url_path method: request.method | \"get\" service: destination.labels[\"service-mesh.3scale.net/service-id\"] | \"\"", "apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: bookinfo spec: selector: matchLabels: app: productpage jwtRules: - issuer: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak jwksUri: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization spec: template: authorization params: subject: user: request.query_params[\"user_key\"] | request.headers[\"user-key\"] | properties: app_id: request.query_params[\"app_id\"] | request.headers[\"app-id\"] | \"\" app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" client_id: request.auth.claims[\"azp\"] | \"\" action: path: request.url_path method: request.method | \"get\" service: destination.labels[\"service-mesh.3scale.net/service-id\"] | \"\"", "oc get pods -n <istio-system>", "oc logs <istio-system>", "oc get pods -n openshift-operators", "NAME READY STATUS RESTARTS AGE istio-operator-bb49787db-zgr87 1/1 Running 0 15s jaeger-operator-7d5c4f57d8-9xphf 1/1 Running 0 2m42s kiali-operator-f9c8d84f4-7xh2v 1/1 Running 0 64s", "oc get pods -n openshift-operators-redhat", "NAME READY STATUS RESTARTS AGE elasticsearch-operator-d4f59b968-796vq 1/1 Running 0 15s", "oc logs -n openshift-operators <podName>", "oc logs -n openshift-operators istio-operator-bb49787db-zgr87", "oc get pods -n istio-system", "NAME READY STATUS RESTARTS AGE grafana-6776785cfc-6fz7t 2/2 Running 0 102s istio-egressgateway-5f49dd99-l9ppq 1/1 Running 0 103s istio-ingressgateway-6dc885c48-jjd8r 1/1 Running 0 103s istiod-basic-6c9cc55998-wg4zq 1/1 Running 0 2m14s jaeger-6865d5d8bf-zrfss 2/2 Running 0 100s kiali-579799fbb7-8mwc8 1/1 Running 0 46s prometheus-5c579dfb-6qhjk 2/2 Running 0 115s", "oc get smcp -n <istio-system>", "NAME READY STATUS PROFILES VERSION AGE basic 10/10 ComponentsReady [\"default\"] 2.1.3 4m2s", "NAME READY STATUS TEMPLATE VERSION AGE basic-install 10/10 UpdateSuccessful default v1.1 3d16h", "oc describe smcp <smcp-name> -n <controlplane-namespace>", "oc describe smcp basic -n istio-system", "oc get jaeger -n <istio-system>", "NAME STATUS VERSION STRATEGY STORAGE AGE jaeger Running 1.30.0 allinone memory 15m", "oc get kiali -n <istio-system>", "NAME AGE kiali 15m", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "export JAEGER_URL=USD(oc get route -n istio-system jaeger 
-o jsonpath='{.spec.host}')", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "oc project istio-system", "oc edit smcp <smcp_name>", "spec: proxy: accessLogging: file: name: /dev/stdout #file name", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.9.0", "oc adm must-gather -- /usr/bin/gather_audit_logs", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s", "oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.3", "oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.3 gather <namespace>", "oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}'", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.3 proxy: runtime: container: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 128Mi tracing: type: Jaeger gateways: ingress: # istio-ingressgateway service: type: ClusterIP ports: - name: status-port port: 15020 - name: http2 port: 80 targetPort: 8080 - name: https port: 443 targetPort: 8443 meshExpansionPorts: [] egress: # istio-egressgateway service: type: ClusterIP ports: - name: status-port port: 15020 - name: http2 port: 80 targetPort: 8080 - name: https port: 443 targetPort: 8443 additionalIngress: some-other-ingress-gateway: {} additionalEgress: some-other-egress-gateway: {} policy: type: Mixer mixer: # only applies if policy.type: Mixer enableChecks: true failOpen: false telemetry: type: Istiod # or Mixer mixer: # only applies if telemetry.type: Mixer, for v1 telemetry sessionAffinity: false batching: maxEntries: 100 maxTime: 1s adapters: kubernetesenv: true stdio: enabled: true outputAsJSON: true addons: grafana: enabled: true install: config: env: {} envSecrets: {} persistence: enabled: true storageClassName: \"\" accessMode: ReadWriteOnce capacity: requests: storage: 5Gi service: ingress: contextPath: /grafana tls: termination: reencrypt kiali: name: kiali enabled: true install: # install kiali CR if not present dashboard: viewOnly: false enableGrafana: true enableTracing: true enablePrometheus: true service: ingress: contextPath: /kiali jaeger: name: jaeger install: storage: type: Elasticsearch # or Memory memory: maxTraces: 100000 elasticsearch: nodeCount: 3 storage: {} redundancyPolicy: SingleRedundancy indexCleaner: {} ingress: {} # jaeger ingress configuration runtime: components: pilot: deployment: replicas: 2 pod: affinity: {} container: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 128Mi grafana: deployment: {} pod: {} kiali: deployment: {} pod: {}", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: general: logging: componentLevels: {} # misc: error logAsJSON: false validationMessages: true", "logging:", "logging: componentLevels:", "logging: logLevels:", "logging: logAsJSON:", "validationMessages:", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: profiles: - YourProfileName", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.3 tracing: sampling: 100 type: Jaeger", "tracing: sampling:", "tracing: type:", "spec: addons: 3Scale: enabled: false PARAM_THREESCALE_LISTEN_ADDR: 3333 PARAM_THREESCALE_LOG_LEVEL: info PARAM_THREESCALE_LOG_JSON: true PARAM_THREESCALE_LOG_GRPC: false 
PARAM_THREESCALE_REPORT_METRICS: true PARAM_THREESCALE_METRICS_PORT: 8080 PARAM_THREESCALE_CACHE_TTL_SECONDS: 300 PARAM_THREESCALE_CACHE_REFRESH_SECONDS: 180 PARAM_THREESCALE_CACHE_ENTRIES_MAX: 1000 PARAM_THREESCALE_CACHE_REFRESH_RETRIES: 1 PARAM_THREESCALE_ALLOW_INSECURE_CONN: false PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS: 10 PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS: 60 PARAM_USE_CACHED_BACKEND: false PARAM_BACKEND_CACHE_FLUSH_INTERVAL_SECONDS: 15 PARAM_BACKEND_CACHE_POLICY_FAIL_CLOSED: true", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: addons: kiali: name: kiali enabled: true install: dashboard: viewOnly: false enableGrafana: true enableTracing: true enablePrometheus: true service: ingress: contextPath: /kiali", "spec: addons: kiali: name:", "kiali: enabled:", "kiali: install:", "kiali: install: dashboard:", "kiali: install: dashboard: viewOnly:", "kiali: install: dashboard: enableGrafana:", "kiali: install: dashboard: enablePrometheus:", "kiali: install: dashboard: enableTracing:", "kiali: install: service:", "kiali: install: service: metadata:", "kiali: install: service: metadata: annotations:", "kiali: install: service: metadata: labels:", "kiali: install: service: ingress:", "kiali: install: service: ingress: metadata: annotations:", "kiali: install: service: ingress: metadata: labels:", "kiali: install: service: ingress: enabled:", "kiali: install: service: ingress: contextPath:", "install: service: ingress: hosts:", "install: service: ingress: tls:", "kiali: install: service: nodePort:", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.3 tracing: sampling: 100 type: Jaeger", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.3 tracing: sampling: 10000 type: Jaeger addons: jaeger: name: jaeger install: storage: type: Memory", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.3 tracing: sampling: 10000 type: Jaeger addons: jaeger: name: jaeger #name of Jaeger CR install: storage: type: Elasticsearch ingress: enabled: true runtime: components: tracing.jaeger.elasticsearch: # only supports resources and image name container: resources: {}", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.3 tracing: sampling: 1000 type: Jaeger addons: jaeger: name: MyJaegerInstance #name of Jaeger CR install: storage: type: Elasticsearch ingress: enabled: true", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.3 tracing: sampling: 1000 type: Jaeger addons: jaeger: name: MyJaegerInstance #name of Jaeger CR", "apiVersion: jaegertracing.io/v1 kind: Jaeger spec: ingress: enabled: true openshift: htpasswdFile: /etc/proxy/htpasswd/auth sar: '{\"namespace\": \"istio-system\", \"resource\": \"pods\", \"verb\": \"get\"}' options: {} resources: {} security: oauth-proxy volumes: - name: secret-htpasswd secret: secretName: htpasswd - configMap: defaultMode: 420 items: - key: ca-bundle.crt path: tls-ca-bundle.pem name: trusted-ca-bundle optional: true name: trusted-ca-bundle volumeMounts: - mountPath: /etc/proxy/htpasswd name: secret-htpasswd - mountPath: /etc/pki/ca-trust/extracted/pem/ name: trusted-ca-bundle readOnly: true", "oc login https://<HOSTNAME>:6443", "oc project istio-system", "oc edit -n tracing-system -f jaeger.yaml", "apiVersion: jaegertracing.io/v1 kind: Jaeger spec: ingress: enabled: true openshift: htpasswdFile: 
/etc/proxy/htpasswd/auth sar: '{\"namespace\": \"istio-system\", \"resource\": \"pods\", \"verb\": \"get\"}' options: {} resources: {} security: oauth-proxy volumes: - name: secret-htpasswd secret: secretName: htpasswd - configMap: defaultMode: 420 items: - key: ca-bundle.crt path: tls-ca-bundle.pem name: trusted-ca-bundle optional: true name: trusted-ca-bundle volumeMounts: - mountPath: /etc/proxy/htpasswd name: secret-htpasswd - mountPath: /etc/pki/ca-trust/extracted/pem/ name: trusted-ca-bundle readOnly: true", "oc apply -n tracing-system -f <jaeger.yaml>", "oc get pods -n tracing-system -w", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: name spec: strategy: <deployment_strategy> allInOne: options: {} resources: {} agent: options: {} resources: {} collector: options: {} resources: {} sampling: options: {} storage: type: options: {} query: options: {} resources: {} ingester: options: {} resources: {} options: {}", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-all-in-one-inmemory", "collector: replicas:", "spec: collector: options: {}", "options: collector: num-workers:", "options: collector: queue-size:", "options: kafka: producer: topic: jaeger-spans", "options: kafka: producer: brokers: my-cluster-kafka-brokers.kafka:9092", "options: log-level:", "spec: sampling: options: {} default_strategy: service_strategy:", "default_strategy: type: service_strategy: type:", "default_strategy: param: service_strategy: param:", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: with-sampling spec: sampling: options: default_strategy: type: probabilistic param: 0.5 service_strategies: - service: alpha type: probabilistic param: 0.8 operation_strategies: - operation: op1 type: probabilistic param: 0.2 - operation: op2 type: probabilistic param: 0.4 - service: beta type: ratelimiting param: 5", "spec: sampling: options: default_strategy: type: probabilistic param: 1", "spec: storage: type:", "storage: secretname:", "storage: options: {}", "storage: esIndexCleaner: enabled:", "storage: esIndexCleaner: numberOfDays:", "storage: esIndexCleaner: schedule:", "elasticsearch: properties: doNotProvision:", "elasticsearch: properties: name:", "elasticsearch: nodeCount:", "elasticsearch: resources: requests: cpu:", "elasticsearch: resources: requests: memory:", "elasticsearch: resources: limits: cpu:", "elasticsearch: resources: limits: memory:", "elasticsearch: redundancyPolicy:", "elasticsearch: useCertManagement:", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 3 resources: requests: cpu: 1 memory: 16Gi limits: memory: 16Gi", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 1 storage: 1 storageClassName: gp2 size: 5Gi resources: requests: cpu: 200m memory: 4Gi limits: memory: 4Gi redundancyPolicy: ZeroRedundancy", "es: server-urls:", "es: max-doc-count:", "es: max-num-spans:", "es: max-span-age:", "es: sniffer:", "es: sniffer-tls-enabled:", "es: timeout:", "es: username:", "es: password:", "es: version:", "es: num-replicas:", "es: num-shards:", "es: create-index-templates:", "es: index-prefix:", "es: bulk: actions:", "es: bulk: flush-interval:", "es: bulk: size:", "es: bulk: workers:", "es: tls: ca:", "es: tls: cert:", "es: tls: enabled:", "es: tls: key:", "es: tls: server-name:", "es: token-file:", "es-archive: bulk: actions:", "es-archive: bulk: 
flush-interval:", "es-archive: bulk: size:", "es-archive: bulk: workers:", "es-archive: create-index-templates:", "es-archive: enabled:", "es-archive: index-prefix:", "es-archive: max-doc-count:", "es-archive: max-num-spans:", "es-archive: max-span-age:", "es-archive: num-replicas:", "es-archive: num-shards:", "es-archive: password:", "es-archive: server-urls:", "es-archive: sniffer:", "es-archive: sniffer-tls-enabled:", "es-archive: timeout:", "es-archive: tls: ca:", "es-archive: tls: cert:", "es-archive: tls: enabled:", "es-archive: tls: key:", "es-archive: tls: server-name:", "es-archive: token-file:", "es-archive: username:", "es-archive: version:", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 index-prefix: my-prefix tls: ca: /es/certificates/ca.crt secretName: tracing-secret volumeMounts: - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 1 index-prefix: my-prefix tls: 2 ca: /es/certificates/ca.crt secretName: tracing-secret 3 volumeMounts: 4 - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public", "apiVersion: logging.openshift.io/v1 kind: Elasticsearch metadata: annotations: logging.openshift.io/elasticsearch-cert-management: \"true\" logging.openshift.io/elasticsearch-cert.jaeger-custom-es: \"user.jaeger\" logging.openshift.io/elasticsearch-cert.curator-custom-es: \"system.logging.curator\" name: custom-es spec: managementState: Managed nodeSpec: resources: limits: memory: 16Gi requests: cpu: 1 memory: 16Gi nodes: - nodeCount: 3 proxyResources: {} resources: {} roles: - master - client - data storage: {} redundancyPolicy: ZeroRedundancy", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-prod spec: strategy: production storage: type: elasticsearch elasticsearch: name: custom-es doNotProvision: true useCertManagement: true", "spec: query: replicas:", "spec: query: options: {}", "options: log-level:", "options: query: base-path:", "apiVersion: jaegertracing.io/v1 kind: \"Jaeger\" metadata: name: \"my-jaeger\" spec: strategy: allInOne allInOne: options: log-level: debug query: base-path: /jaeger", "spec: ingester: options: {}", "options: deadlockInterval:", "options: kafka: consumer: topic:", "options: kafka: consumer: brokers:", "options: log-level:", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-streaming spec: strategy: streaming collector: options: kafka: producer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: options: kafka: consumer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: deadlockInterval: 5 storage: type: elasticsearch options: es: server-urls: http://elasticsearch:9200", "oc delete smmr -n istio-system default", "oc get smcp -n istio-system", "oc delete smcp -n istio-system <name_of_custom_resource>", "oc delete validatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io", "oc delete mutatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io", "oc delete svc maistra-admission-controller -n 
openshift-operators", "oc -n openshift-operators delete ds -lmaistra-version", "oc delete clusterrole/istio-admin clusterrole/istio-cni clusterrolebinding/istio-cni", "oc delete clusterrole istio-view istio-edit", "oc delete clusterrole jaegers.jaegertracing.io-v1-admin jaegers.jaegertracing.io-v1-crdview jaegers.jaegertracing.io-v1-edit jaegers.jaegertracing.io-v1-view", "oc get crds -o name | grep '.*\\.istio\\.io' | xargs -r -n 1 oc delete", "oc get crds -o name | grep '.*\\.maistra\\.io' | xargs -r -n 1 oc delete", "oc get crds -o name | grep '.*\\.kiali\\.io' | xargs -r -n 1 oc delete", "oc delete crds jaegers.jaegertracing.io", "oc delete cm -n openshift-operators maistra-operator-cabundle", "oc delete cm -n openshift-operators istio-cni-config istio-cni-config-v2-3", "oc delete sa -n openshift-operators istio-cni" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/service_mesh/service-mesh-2-x
Chapter 3. Deploying OpenShift Data Foundation on Red Hat OpenStack Platform in external mode
Chapter 3. Deploying OpenShift Data Foundation on Red Hat OpenStack Platform in external mode Red Hat OpenShift Data Foundation can use an externally hosted Red Hat Ceph Storage (RHCS) cluster as the storage provider on Red Hat OpenStack Platform. See Planning your deployment for more information. For instructions regarding how to install a RHCS cluster, see the installation guide . Follow these steps to deploy OpenShift Data Foundation in external mode: Install the OpenShift Data Foundation Operator . Create the OpenShift Data Foundation Cluster . 3.1. Installing Red Hat OpenShift Data Foundation Operator You can install the Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.18 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to take effect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify that the Data Foundation dashboard is available. 3.2. Creating an OpenShift Data Foundation Cluster for external mode You need to create a new OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator on OpenShift Container Platform deployed on Red Hat OpenStack Platform. Prerequisites Ensure the OpenShift Container Platform version is 4.18 or above before deploying OpenShift Data Foundation 4.18.
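One way to confirm this prerequisite from the command line, and to apply the blank node selector mentioned in the Important note of Section 3.1, is sketched below. This is a minimal sketch that assumes the standard oc client; the exact commands from the original document are not included in this extract.

# Check that the cluster reports OpenShift Container Platform 4.18 or later
$ oc get clusterversion

# Apply a blank node selector to the openshift-storage namespace
# (create the namespace first if it does not exist)
$ oc annotate namespace openshift-storage openshift.io/node-selector=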
OpenShift Data Foundation operator must be installed. For more information, see Installing OpenShift Data Foundation Operator using the Operator Hub . To check the supportability and interoperability of Red Hat Ceph Storage (RHCS) with Red Hat OpenShift Data Foundation in external mode, go to the Red Hat OpenShift Data Foundation Supportability and Interoperability Checker lab. Select Service Type as ODF as Self-Managed Service . Select the appropriate Version from the drop-down. On the Versions tab, click the Supported RHCS Compatibility tab. If you have updated the Red Hat Ceph Storage cluster from a version lower than 4.1.1 to the latest release and it is not a freshly deployed cluster, you must manually set the application type for the CephFS pool on the Red Hat Ceph Storage cluster to enable CephFS PVC creation in external mode. For more details, see Troubleshooting CephFS PVC creation in external mode . Red Hat Ceph Storage must have Ceph Dashboard installed and configured. For more information, see Ceph Dashboard installation and access . Red Hat recommends that the external Red Hat Ceph Storage cluster have the PG Autoscaler enabled. For more information, see The placement group autoscaler section in the Red Hat Ceph Storage documentation. The external Ceph cluster should have an existing RBD pool pre-configured for use. If it does not exist, contact your Red Hat Ceph Storage administrator to create one before you move ahead with OpenShift Data Foundation deployment. Red Hat recommends using a separate pool for each OpenShift Data Foundation cluster. Procedure Click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation, and then click the Create Instance link of Storage Cluster. Select Mode as External . By default, Internal is selected as the deployment mode. Figure 3.1. Connect to external cluster section on Create Storage Cluster form In the Connect to external cluster section, click the Download Script link to download the python script for extracting Ceph cluster details. For extracting the Red Hat Ceph Storage (RHCS) cluster details, contact the RHCS administrator to run the downloaded python script on a Red Hat Ceph Storage node with the admin key. Run the following command on the RHCS node to view the list of available arguments. Important Use python instead of python3 if the Red Hat Ceph Storage 4.x cluster is deployed on a Red Hat Enterprise Linux 7.x (RHEL 7.x) cluster. Note You can also run the script from inside a MON container (containerized deployment) or from a MON node (rpm deployment). To retrieve the external cluster details from the RHCS cluster, run the following command. For example: In the above example, --rbd-data-pool-name is a mandatory parameter used for providing block storage in OpenShift Data Foundation. --rgw-endpoint is optional. Provide this parameter if object storage is to be provisioned through Ceph Rados Gateway for OpenShift Data Foundation. Provide the endpoint in the following format: <ip_address>:<port> --monitoring-endpoint is optional. It is the IP address of the active ceph-mgr reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated. --monitoring-endpoint-port is optional. It is the port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint . If not provided, the value is automatically populated.
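The script invocation and its example JSON output are not included in this extract. As an illustration of how the parameters described here fit together, a hedged sketch follows; the script file name and all values are assumptions, so substitute the names used in your environment.

# List the available arguments (run on the RHCS node)
$ python3 ceph-external-cluster-details-exporter.py --help

# Illustrative invocation with placeholder values
$ python3 ceph-external-cluster-details-exporter.py \
    --rbd-data-pool-name <rbd_pool_name> \
    --rgw-endpoint <ip_address>:<port> \
    --monitoring-endpoint <ceph_mgr_ip> \
    --monitoring-endpoint-port <port> \
    --run-as-user client.ocs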
--run-as-user is an optional parameter used for providing a name for the Ceph user that is created by the script. If this parameter is not specified, a default user name client.healthchecker is created. The permissions for the new user are set as: caps: [mgr] allow command config caps: [mon] allow r, allow command quorum_status, allow command version caps: [osd] allow rwx pool= RGW_POOL_PREFIX.rgw.meta , allow r pool= .rgw.root , allow rw pool= RGW_POOL_PREFIX.rgw.control , allow rx pool= RGW_POOL_PREFIX.rgw.log , allow x pool= RGW_POOL_PREFIX.rgw.buckets.index Example of JSON output generated using the python script: Save the JSON output to a file with a .json extension. Note For OpenShift Data Foundation to work seamlessly, ensure that the parameters (RGW endpoint, CephFS details, RBD pool, and so on) to be uploaded using the JSON file remain unchanged on the RHCS external cluster after the storage cluster creation. Click External cluster metadata Browse to select and upload the JSON file. The content of the JSON file is populated and displayed in the text box. Figure 3.2. JSON file content Click Create . The Create button is enabled only after you upload the .json file. Verification steps Verify that the final Status of the installed storage cluster shows as Phase: Ready with a green tick mark. Click Operators Installed Operators Storage Cluster link to view the storage cluster installation status. Alternatively, when you are on the Operator Details tab, you can click the Storage Cluster tab to view the status. To verify that OpenShift Data Foundation, its pods, and the StorageClass are successfully installed, see Verifying your external mode OpenShift Data Foundation installation . 3.3. Verifying your OpenShift Data Foundation installation for external mode Use this section to verify that OpenShift Data Foundation is deployed correctly. 3.3.1. Verifying the state of the pods Click Workloads Pods from the left pane of the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see Table 3.1, "Pods corresponding to OpenShift Data Foundation components" Verify that the following pods are in running state: Table 3.1. Pods corresponding to OpenShift Data Foundation components Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any worker node) ocs-metrics-exporter-* (1 pod on any worker node) odf-operator-controller-manager-* (1 pod on any worker node) odf-console-* (1 pod on any worker node) csi-addons-controller-manager-* (1 pod on any worker node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any worker node) Multicloud Object Gateway noobaa-operator-* (1 pod on any worker node) noobaa-core-* (1 pod on any worker node) noobaa-db-pg-* (1 pod on any worker node) noobaa-endpoint-* (1 pod on any worker node) CSI cephfs csi-cephfsplugin-* (1 pod on each worker node) csi-cephfsplugin-provisioner-* (2 pods distributed across worker nodes) Note If an MDS is not deployed in the external cluster, the csi-cephfsplugin pods will not be created. rbd csi-rbdplugin-* (1 pod on each worker node) csi-rbdplugin-provisioner-* (2 pods distributed across worker nodes) 3.3.2. Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage Data Foundation .
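If you prefer to check from the command line, minimal sketches of the verifications described in this section and in Sections 3.3.5 and 3.3.6 below are shown here. They assume the standard oc client and the default openshift-storage namespace; the exact commands from the original document are not included in this extract.

# Pod state (complements Section 3.3.1)
$ oc get pods -n openshift-storage

# Connection to the external Red Hat Ceph Storage cluster (Section 3.3.5)
$ oc get cephcluster -n openshift-storage

# Storage cluster readiness and the External flag (Section 3.3.6)
$ oc get storagecluster -n openshift-storage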
In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop-up that appears. In the Status card of the Block and File tab, verify that both Storage Cluster and Data Resiliency have a green tick. In the Details card, verify that the cluster information is displayed as follows: Service Name: OpenShift Data Foundation; Cluster Name: ocs-external-storagecluster; Provider: OpenStack; Mode: External; Version: ocs-operator-4.17.0. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 3.3.3. Verifying that the Multicloud Object Gateway is healthy In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop-up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the Multicloud Object Gateway (MCG) information is displayed. Note The RADOS Object Gateway is only listed if RADOS Object Gateway endpoint details were included while deploying OpenShift Data Foundation in external mode. For more information on the health of the OpenShift Data Foundation cluster using the object dashboard, see Monitoring OpenShift Data Foundation . 3.3.4. Verifying that the storage classes are created and listed Click Storage Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-external-storagecluster-ceph-rbd ocs-external-storagecluster-ceph-rgw ocs-external-storagecluster-cephfs openshift-storage.noobaa.io Note If MDS is not deployed in the external cluster, the ocs-external-storagecluster-cephfs storage class will not be created. If RGW is not deployed in the external cluster, the ocs-external-storagecluster-ceph-rgw storage class will not be created. For more information regarding MDS and RGW, see the Red Hat Ceph Storage documentation. 3.3.5. Verifying that the Ceph cluster is connected Run the following command to verify that the OpenShift Data Foundation cluster is connected to the external Red Hat Ceph Storage cluster. 3.3.6. Verifying that the storage cluster is ready Run the following command to verify that the storage cluster is ready and the External option is set to true. 3.4. Uninstalling OpenShift Data Foundation 3.4.1. Uninstalling OpenShift Data Foundation from external storage system Use the steps in this section to uninstall OpenShift Data Foundation. Uninstalling OpenShift Data Foundation does not remove the RBD pool from the external cluster, or uninstall the external Red Hat Ceph Storage cluster. Uninstall Annotations Annotations on the Storage Cluster are used to change the behavior of the uninstall process. To define the uninstall behavior, the following two annotations have been introduced in the storage cluster: uninstall.ocs.openshift.io/cleanup-policy: delete uninstall.ocs.openshift.io/mode: graceful Note The uninstall.ocs.openshift.io/cleanup-policy is not applicable for external mode. The following table provides information on the different values that can be used with these annotations: Table 3.2.
uninstall.ocs.openshift.io uninstall annotations descriptions Annotation Value Default Behavior cleanup-policy delete Yes Rook cleans up the physical drives and the DataDirHostPath cleanup-policy retain No Rook does not clean up the physical drives and the DataDirHostPath mode graceful Yes Rook and NooBaa pause the uninstall process until the PVCs and the OBCs are removed by the administrator or user mode forced No Rook and NooBaa proceed with the uninstall even if PVCs or OBCs provisioned using Rook and NooBaa, respectively, still exist You can change the uninstall mode by editing the value of the annotation by using the following commands: Prerequisites Ensure that the OpenShift Data Foundation cluster is in a healthy state. The uninstall process can fail when some of the pods are not terminated successfully due to insufficient resources or nodes. If the cluster is in an unhealthy state, contact Red Hat Customer Support before uninstalling OpenShift Data Foundation. Ensure that applications are not consuming persistent volume claims (PVCs) or object bucket claims (OBCs) that use the storage classes provided by OpenShift Data Foundation. Procedure Delete the volume snapshots that are using OpenShift Data Foundation. List the volume snapshots from all the namespaces. From the output of the command, identify and delete the volume snapshots that are using OpenShift Data Foundation. Delete PVCs and OBCs that are using OpenShift Data Foundation. In the default uninstall mode (graceful), the uninstaller waits until all the PVCs and OBCs that use OpenShift Data Foundation are deleted. If you wish to delete the Storage Cluster without deleting the PVCs beforehand, you can set the uninstall mode annotation to "forced" and skip this step. Doing so results in orphan PVCs and OBCs in the system. Delete OpenShift Container Platform monitoring stack PVCs using OpenShift Data Foundation. See Removing monitoring stack from OpenShift Data Foundation . Delete OpenShift Container Platform Registry PVCs using OpenShift Data Foundation. See Removing OpenShift Container Platform registry from OpenShift Data Foundation . Delete OpenShift Container Platform logging PVCs using OpenShift Data Foundation. See Removing the cluster logging operator from OpenShift Data Foundation . Delete other PVCs and OBCs provisioned using OpenShift Data Foundation. The following is a sample script that identifies the PVCs and OBCs provisioned using OpenShift Data Foundation. The script ignores the PVCs and OBCs that are used internally by OpenShift Data Foundation. Delete the OBCs. Delete the PVCs. Ensure that you have removed any custom backing stores, bucket classes, and so on that are created in the cluster. Delete the Storage Cluster object and wait for the removal of the associated resources. Delete the namespace and wait until the deletion is complete. You will need to switch to another project if openshift-storage is the active project. For example: The project is deleted if the following command returns a NotFound error. Note While uninstalling OpenShift Data Foundation, if the namespace is not deleted completely and remains in the Terminating state, perform the steps in Troubleshooting and deleting remaining resources during Uninstall to identify objects that are blocking the namespace from being terminated. Confirm that all PVs provisioned using OpenShift Data Foundation are deleted. If there is any PV left in the Released state, delete it (a sketch for finding such PVs follows this list). Remove CustomResourceDefinitions .
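The following is a minimal sketch, not part of the official procedure, for spotting any leftover PVs in the Released state before deleting them. It assumes the jq utility is available on the workstation; the PV name placeholder is illustrative:
$ oc get pv -o json | jq -r '.items[] | select(.status.phase=="Released") | .metadata.name'
$ oc delete pv <pv name>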
To ensure that OpenShift Data Foundation is uninstalled completely: In the OpenShift Container Platform Web Console, click Storage . Verify that OpenShift Data Foundation no longer appears under Storage. 3.4.2. Removing monitoring stack from OpenShift Data Foundation Use this section to clean up the monitoring stack from OpenShift Data Foundation. The PVCs that are created as a part of configuring the monitoring stack are in the openshift-monitoring namespace. Prerequisites The OpenShift Container Platform monitoring stack is configured to use PVCs. For information, see configuring monitoring stack . Procedure List the pods and PVCs that are currently running in the openshift-monitoring namespace. Edit the monitoring configmap . Remove any config sections that reference the OpenShift Data Foundation storage classes, as shown in the following example, and save it. Before editing After editing In this example, the alertmanagerMain and prometheusK8s monitoring components are using the OpenShift Data Foundation PVCs. List the pods consuming the PVCs. In this example, the alertmanagerMain and prometheusK8s pods that were consuming the PVCs are in the Terminating state. You can delete the PVCs once these pods are no longer using the OpenShift Data Foundation PVCs. Delete the relevant PVCs. Make sure you delete all the PVCs that are consuming the storage classes. 3.4.3. Removing OpenShift Container Platform registry from OpenShift Data Foundation Use this section to clean up the OpenShift Container Platform registry from OpenShift Data Foundation. If you want to configure alternative storage, see image registry . The PVCs that are created as a part of configuring the OpenShift Container Platform registry are in the openshift-image-registry namespace. Prerequisites The image registry should have been configured to use an OpenShift Data Foundation PVC. Procedure Edit the configs.imageregistry.operator.openshift.io object and remove the content in the storage section. Before editing After editing In this example, the PVC is called registry-cephfs-rwx-pvc , which is now safe to delete. Delete the PVC. 3.4.4. Removing the cluster logging operator from OpenShift Data Foundation Use this section to clean up the cluster logging operator from OpenShift Data Foundation. The Persistent Volume Claims (PVCs) that are created as a part of configuring the cluster logging operator are in the openshift-logging namespace. Prerequisites The cluster logging instance should have been configured to use the OpenShift Data Foundation PVCs. Procedure Remove the ClusterLogging instance in the namespace. The PVCs in the openshift-logging namespace are now safe to delete. Delete the PVCs. <pvc-name> is the name of the PVC.
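A hedged sketch for double-checking which PVCs in the openshift-logging namespace are backed by OpenShift Data Foundation before deleting them; the grep pattern is an assumption based on the default ocs-external-storagecluster storage class prefix used throughout this guide, and it matches the STORAGECLASS column of the oc get pvc output:
$ oc get pvc -n openshift-logging
$ oc get pvc -n openshift-logging --no-headers | grep ocs-external-storagecluster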
[ "oc annotate namespace openshift-storage openshift.io/node-selector=", "python3 ceph-external-cluster-details-exporter.py --help", "python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name <rbd block pool name> [optional arguments]", "python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name ceph-rbd --monitoring-endpoint xxx.xxx.xxx.xxx --monitoring-endpoint-port xxxx --rgw-endpoint xxx.xxx.xxx.xxx:xxxx --run-as-user client.ocs", "[{\"name\": \"rook-ceph-mon-endpoints\", \"kind\": \"ConfigMap\", \"data\": {\"data\": \"xxx.xxx.xxx.xxx:xxxx\", \"maxMonId\": \"0\", \"mapping\": \"{}\"}}, {\"name\": \"rook-ceph-mon\", \"kind\": \"Secret\", \"data\": {\"admin-secret\": \"admin-secret\", \"fsid\": \"<fs-id>\", \"mon-secret\": \"mon-secret\"}}, {\"name\": \"rook-ceph-operator-creds\", \"kind\": \"Secret\", \"data\": {\"userID\": \"client.healthchecker\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-node\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-node\", \"userKey\": \"<user-key>\"}}, {\"name\": \"ceph-rbd\", \"kind\": \"StorageClass\", \"data\": {\"pool\": \"ceph-rbd\"}}, {\"name\": \"monitoring-endpoint\", \"kind\": \"CephCluster\", \"data\": {\"MonitoringEndpoint\": \"xxx.xxx.xxx.xxx\", \"MonitoringPort\": \"xxxx\"}}, {\"name\": \"rook-csi-rbd-provisioner\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-provisioner\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-cephfs-provisioner\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-provisioner\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"rook-csi-cephfs-node\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-node\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"cephfs\", \"kind\": \"StorageClass\", \"data\": {\"fsName\": \"cephfs\", \"pool\": \"cephfs_data\"}}, {\"name\": \"ceph-rgw\", \"kind\": \"StorageClass\", \"data\": {\"endpoint\": \"xxx.xxx.xxx.xxx:xxxx\", \"poolPrefix\": \"default\"}}]", "oc get cephcluster -n openshift-storage", "NAME DATADIRHOSTPATH MONCOUNT AGE PHASE MESSAGE HEALTH ocs-external-storagecluster-cephcluster 31m15s Connected Cluster connected successfully HEALTH_OK", "oc get storagecluster -n openshift-storage", "NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-external-storagecluster 31m15s Ready true 2021-02-29T20:43:04Z 4.17.0", "oc annotate storagecluster ocs-external-storagecluster -n openshift-storage uninstall.ocs.openshift.io/mode=\"forced\" --overwrite storagecluster.ocs.openshift.io/ocs-external-storagecluster annotated", "oc get volumesnapshot --all-namespaces", "oc delete volumesnapshot <VOLUME-SNAPSHOT-NAME> -n <NAMESPACE>", "#!/bin/bash RBD_PROVISIONER=\"openshift-storage.rbd.csi.ceph.com\" CEPHFS_PROVISIONER=\"openshift-storage.cephfs.csi.ceph.com\" NOOBAA_PROVISIONER=\"openshift-storage.noobaa.io/obc\" RGW_PROVISIONER=\"openshift-storage.ceph.rook.io/bucket\" NOOBAA_DB_PVC=\"noobaa-db\" NOOBAA_BACKINGSTORE_PVC=\"noobaa-default-backing-store-noobaa-pvc\" Find all the OCS StorageClasses OCS_STORAGECLASSES=USD(oc get storageclasses | grep -e \"USDRBD_PROVISIONER\" -e \"USDCEPHFS_PROVISIONER\" -e \"USDNOOBAA_PROVISIONER\" -e \"USDRGW_PROVISIONER\" | awk '{print USD1}') List PVCs in each of the StorageClasses for SC in USDOCS_STORAGECLASSES do echo \"======================================================================\" echo \"USDSC StorageClass PVCs and OBCs\" echo \"======================================================================\" oc get pvc --all-namespaces --no-headers 2>/dev/null | grep USDSC | grep 
-v -e \"USDNOOBAA_DB_PVC\" -e \"USDNOOBAA_BACKINGSTORE_PVC\" oc get obc --all-namespaces --no-headers 2>/dev/null | grep USDSC echo done", "oc delete obc <obc name> -n <project name>", "oc delete pvc <pvc name> -n <project-name>", "oc delete -n openshift-storage storagesystem --all --wait=true", "oc project default oc delete project openshift-storage --wait=true --timeout=5m", "oc get project openshift-storage", "oc get pv oc delete pv <pv name>", "oc delete crd backingstores.noobaa.io bucketclasses.noobaa.io cephblockpools.ceph.rook.io cephclusters.ceph.rook.io cephfilesystems.ceph.rook.io cephnfses.ceph.rook.io cephobjectstores.ceph.rook.io cephobjectstoreusers.ceph.rook.io noobaas.noobaa.io ocsinitializations.ocs.openshift.io storageclusters.ocs.openshift.io cephclients.ceph.rook.io cephobjectrealms.ceph.rook.io cephobjectzonegroups.ceph.rook.io cephobjectzones.ceph.rook.io cephrbdmirrors.ceph.rook.io storagesystems.odf.openshift.io --wait=true --timeout=5m", "oc get pod,pvc -n openshift-monitoring NAME READY STATUS RESTARTS AGE pod/alertmanager-main-0 3/3 Running 0 8d pod/alertmanager-main-1 3/3 Running 0 8d pod/alertmanager-main-2 3/3 Running 0 8d pod/cluster-monitoring- operator-84457656d-pkrxm 1/1 Running 0 8d pod/grafana-79ccf6689f-2ll28 2/2 Running 0 8d pod/kube-state-metrics- 7d86fb966-rvd9w 3/3 Running 0 8d pod/node-exporter-25894 2/2 Running 0 8d pod/node-exporter-4dsd7 2/2 Running 0 8d pod/node-exporter-6p4zc 2/2 Running 0 8d pod/node-exporter-jbjvg 2/2 Running 0 8d pod/node-exporter-jj4t5 2/2 Running 0 6d18h pod/node-exporter-k856s 2/2 Running 0 6d18h pod/node-exporter-rf8gn 2/2 Running 0 8d pod/node-exporter-rmb5m 2/2 Running 0 6d18h pod/node-exporter-zj7kx 2/2 Running 0 8d pod/openshift-state-metrics- 59dbd4f654-4clng 3/3 Running 0 8d pod/prometheus-adapter- 5df5865596-k8dzn 1/1 Running 0 7d23h pod/prometheus-adapter- 5df5865596-n2gj9 1/1 Running 0 7d23h pod/prometheus-k8s-0 6/6 Running 1 8d pod/prometheus-k8s-1 6/6 Running 1 8d pod/prometheus-operator- 55cfb858c9-c4zd9 1/1 Running 0 6d21h pod/telemeter-client- 78fc8fc97d-2rgfp 3/3 Running 0 8d NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-0 Bound pvc-0d519c4f-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-external-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-1 Bound pvc-0d5a9825-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-external-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-2 Bound pvc-0d6413dc-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-external-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-0 Bound pvc-0b7c19b0-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-external-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-1 Bound pvc-0b8aed3f-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-external-storagecluster-ceph-rbd 8d", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", ". . . 
apiVersion: v1 data: config.yaml: | alertmanagerMain: volumeClaimTemplate: metadata: name: my-alertmanager-claim spec: resources: requests: storage: 40Gi storageClassName: ocs-external-storagecluster-ceph-rbd prometheusK8s: volumeClaimTemplate: metadata: name: my-prometheus-claim spec: resources: requests: storage: 40Gi storageClassName: ocs-external-storagecluster-ceph-rbd kind: ConfigMap metadata: creationTimestamp: \"2019-12-02T07:47:29Z\" name: cluster-monitoring-config namespace: openshift-monitoring resourceVersion: \"22110\" selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config uid: fd6d988b-14d7-11ea-84ff-066035b9efa8 . . .", ". . . apiVersion: v1 data: config.yaml: | kind: ConfigMap metadata: creationTimestamp: \"2019-11-21T13:07:05Z\" name: cluster-monitoring-config namespace: openshift-monitoring resourceVersion: \"404352\" selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config uid: d12c796a-0c5f-11ea-9832-063cd735b81c . . .", "oc get pod,pvc -n openshift-monitoring NAME READY STATUS RESTARTS AGE pod/alertmanager-main-0 3/3 Terminating 0 10h pod/alertmanager-main-1 3/3 Terminating 0 10h pod/alertmanager-main-2 3/3 Terminating 0 10h pod/cluster-monitoring-operator-84cd9df668-zhjfn 1/1 Running 0 18h pod/grafana-5db6fd97f8-pmtbf 2/2 Running 0 10h pod/kube-state-metrics-895899678-z2r9q 3/3 Running 0 10h pod/node-exporter-4njxv 2/2 Running 0 18h pod/node-exporter-b8ckz 2/2 Running 0 11h pod/node-exporter-c2vp5 2/2 Running 0 18h pod/node-exporter-cq65n 2/2 Running 0 18h pod/node-exporter-f5sm7 2/2 Running 0 11h pod/node-exporter-f852c 2/2 Running 0 18h pod/node-exporter-l9zn7 2/2 Running 0 11h pod/node-exporter-ngbs8 2/2 Running 0 18h pod/node-exporter-rv4v9 2/2 Running 0 18h pod/openshift-state-metrics-77d5f699d8-69q5x 3/3 Running 0 10h pod/prometheus-adapter-765465b56-4tbxx 1/1 Running 0 10h pod/prometheus-adapter-765465b56-s2qg2 1/1 Running 0 10h pod/prometheus-k8s-0 6/6 Terminating 1 9m47s pod/prometheus-k8s-1 6/6 Terminating 1 9m47s pod/prometheus-operator-cbfd89f9-ldnwc 1/1 Running 0 43m pod/telemeter-client-7b5ddb4489-2xfpz 3/3 Running 0 10h NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE persistentvolumeclaim/ocs-alertmanager-claim-alertmanager-main-0 Bound pvc-2eb79797-1fed-11ea-93e1-0a88476a6a64 40Gi RWO ocs-external-storagecluster-ceph-rbd 19h persistentvolumeclaim/ocs-alertmanager-claim-alertmanager-main-1 Bound pvc-2ebeee54-1fed-11ea-93e1-0a88476a6a64 40Gi RWO ocs-external-storagecluster-ceph-rbd 19h persistentvolumeclaim/ocs-alertmanager-claim-alertmanager-main-2 Bound pvc-2ec6a9cf-1fed-11ea-93e1-0a88476a6a64 40Gi RWO ocs-external-storagecluster-ceph-rbd 19h persistentvolumeclaim/ocs-prometheus-claim-prometheus-k8s-0 Bound pvc-3162a80c-1fed-11ea-93e1-0a88476a6a64 40Gi RWO ocs-external-storagecluster-ceph-rbd 19h persistentvolumeclaim/ocs-prometheus-claim-prometheus-k8s-1 Bound pvc-316e99e2-1fed-11ea-93e1-0a88476a6a64 40Gi RWO ocs-external-storagecluster-ceph-rbd 19h", "oc delete -n openshift-monitoring pvc <pvc-name> --wait=true --timeout=5m", "oc edit configs.imageregistry.operator.openshift.io", ". . . storage: pvc: claim: registry-cephfs-rwx-pvc . . .", ". . . storage: emptyDir: {} . . .", "oc delete pvc <pvc-name> -n openshift-image-registry --wait=true --timeout=5m", "oc delete clusterlogging instance -n openshift-logging --wait=true --timeout=5m", "oc delete pvc <pvc-name> -n openshift-logging --wait=true --timeout=5m" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/deploying_openshift_data_foundation_on_red_hat_openstack_platform_in_external_mode
Support
Support OpenShift Container Platform 4.10 Getting support for OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/support/index
Appendix E. Red Hat Virtualization and encrypted communication
Appendix E. Red Hat Virtualization and encrypted communication E.1. Replacing the Red Hat Virtualization Manager CA Certificate You can configure your organization's third-party CA certificate to authenticate users connecting to the Red Hat Virtualization Manager over HTTPS. Third-party CA certificates are not used for authentication between the Manager and hosts or for disk transfer URLs . These HTTPS connections use the self-signed certificate generated by the Manager. Important When you switch to a custom HTTPS certificate, you must use your own CA certificate distribution to make that certificate available on clients. If you are integrating with Red Hat Satellite, you need to manually import the correct certificate into Satellite. If you received the private key and certificate from your CA in a P12 file, use the following procedure to extract them. For other file formats, contact your CA. After extracting the private key and certificate, proceed to Replacing the Red Hat Virtualization Manager Apache CA Certificate . E.1.1. Extracting the Certificate and Private Key from a P12 Bundle The internal CA stores the internally generated key and certificate in a P12 file, in /etc/pki/ovirt-engine/keys/apache.p12 . Store your new file in the same location. The following procedure assumes that the new P12 file is in /tmp/apache.p12 . Warning Do not change the permissions and ownerships for the /etc/pki directory or any subdirectories. The permission for the /etc/pki and the /etc/pki/ovirt-engine directory must remain as the default, 755 . Procedure Back up the current apache.p12 file: # cp -p /etc/pki/ovirt-engine/keys/apache.p12 /etc/pki/ovirt-engine/keys/apache.p12.bck Replace the current file with the new file: # cp /tmp/apache.p12 /etc/pki/ovirt-engine/keys/apache.p12 Extract the private key and certificate to the required locations: # openssl pkcs12 -in /etc/pki/ovirt-engine/keys/apache.p12 -nocerts -nodes > /tmp/apache.key # openssl pkcs12 -in /etc/pki/ovirt-engine/keys/apache.p12 -nokeys > /tmp/apache.cer If the file is password protected, add -passin pass: password to the command, replacing password with the required password. Important For new Red Hat Virtualization installations, you must complete all of the steps in this procedure. E.1.2. Replacing the Red Hat Virtualization Manager Apache CA Certificate You configure your organization's third-party CA certificate to authenticate users connecting to the Administration Portal and the VM Portal over HTTPS. Warning Do not change the permissions and ownerships for the /etc/pki directory or any subdirectories. The permission for the /etc/pki and the /etc/pki/ovirt-engine directory must remain as the default, 755 . Prerequisites Third-party CA (Certificate Authority) certificate. It is provided as a PEM file. The certificate chain must be complete up to the root certificate. The chain's order is critical and must be from the last intermediate certificate to the root certificate. This procedure assumes that the third-party CA certificate is provided in /tmp/3rd-party-ca-cert.pem . Private key that you want to use for Apache httpd. It must not have a password. This procedure assumes that it is located in /tmp/apache.key . Certificate issued by the CA. This procedure assumes that it is located in /tmp/apache.cer . Procedure If you are using a self-hosted engine, put the environment into global maintenance mode. # hosted-engine --set-maintenance --mode=global For more information, see Maintaining the Self-Hosted Engine . 
Add your CA certificate to the host-wide trust store: # cp /tmp/3rd-party-ca-cert.pem /etc/pki/ca-trust/source/anchors # update-ca-trust The Manager has been configured to use /etc/pki/ovirt-engine/apache-ca.pem , which is symbolically linked to /etc/pki/ovirt-engine/ca.pem . Remove the symbolic link: # rm /etc/pki/ovirt-engine/apache-ca.pem Save your CA certificate as /etc/pki/ovirt-engine/apache-ca.pem : # cp /tmp/3rd-party-ca-cert.pem /etc/pki/ovirt-engine/apache-ca.pem Back up the existing private key and certificate: # cp /etc/pki/ovirt-engine/keys/apache.key.nopass /etc/pki/ovirt-engine/keys/apache.key.nopass.bck # cp /etc/pki/ovirt-engine/certs/apache.cer /etc/pki/ovirt-engine/certs/apache.cer.bck Copy the private key to the required location: # cp /tmp/apache.key /etc/pki/ovirt-engine/keys/apache.key.nopass Set the private key owner to root and set the permissions to 0640 : # chown root:ovirt /etc/pki/ovirt-engine/keys/apache.key.nopass # chmod 640 /etc/pki/ovirt-engine/keys/apache.key.nopass Copy the certificate to the required location: # cp /tmp/apache.cer /etc/pki/ovirt-engine/certs/apache.cer Set the certificate owner to root and set the permissions to 0644 : # chown root:ovirt /etc/pki/ovirt-engine/certs/apache.cer # chmod 644 /etc/pki/ovirt-engine/certs/apache.cer Restart the Apache server: # systemctl restart httpd.service Create a new trust store configuration file, /etc/ovirt-engine/engine.conf.d/99-custom-truststore.conf , with the following parameters: ENGINE_HTTPS_PKI_TRUST_STORE="/etc/pki/java/cacerts" ENGINE_HTTPS_PKI_TRUST_STORE_PASSWORD="" Copy the /etc/ovirt-engine/ovirt-websocket-proxy.conf.d/10-setup.conf file, and rename it with an index number that is greater than 10 (for example, 99-setup.conf ). Add the following parameters to the new file: Restart the websocket-proxy service: # systemctl restart ovirt-websocket-proxy.service If you manually changed the /etc/ovirt-provider-ovn/conf.d/10-setup-ovirt-provider-ovn.conf file, or are using a configuration file from an older installation, make sure that the Manager is still configured to use /etc/pki/ovirt-engine/apache-ca.pem as the certificate source. Create the /etc/ovirt-engine-backup/engine-backup-config.d directory: # mkdir -p /etc/ovirt-engine-backup/engine-backup-config.d Create the /etc/ovirt-engine-backup/engine-backup-config.d/update-system-wide-pki.sh file with the following content. This enables ovirt-engine-backup to automatically update the system on restore. BACKUP_PATHS="${BACKUP_PATHS} /etc/ovirt-engine-backup" cp -f /etc/pki/ovirt-engine/apache-ca.pem \ /etc/pki/ca-trust/source/anchors/3rd-party-ca-cert.pem update-ca-trust Restart the ovirt-provider-ovn service: # systemctl restart ovirt-provider-ovn.service Restart the ovirt-imageio service: # systemctl restart ovirt-imageio.service Restart the ovirt-engine service: # systemctl restart ovirt-engine.service If you are using a self-hosted engine, turn off global maintenance mode: # hosted-engine --set-maintenance --mode=none Your users can now connect to the Administration Portal and VM Portal without seeing a certificate warning.
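As an optional, hedged check that is not part of the official procedure, you can confirm that Apache is now serving the new certificate by inspecting it with openssl; the host name manager.example.com is a placeholder for your Manager's FQDN:
# openssl s_client -connect manager.example.com:443 -showcerts </dev/null | openssl x509 -noout -subject -issuer -dates
The first command retrieves the certificate presented by the HTTPS listener, and the second prints its subject, issuer, and validity dates so you can verify that they match the certificate issued by your third-party CA.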
[ "cp -p /etc/pki/ovirt-engine/keys/apache.p12 /etc/pki/ovirt-engine/keys/apache.p12.bck", "cp /tmp/apache.p12 /etc/pki/ovirt-engine/keys/apache.p12", "openssl pkcs12 -in /etc/pki/ovirt-engine/keys/apache.p12 -nocerts -nodes > /tmp/apache.key openssl pkcs12 -in /etc/pki/ovirt-engine/keys/apache.p12 -nokeys > /tmp/apache.cer", "hosted-engine --set-maintenance --mode=global", "cp /tmp/3rd-party-ca-cert.pem /etc/pki/ca-trust/source/anchors update-ca-trust", "rm /etc/pki/ovirt-engine/apache-ca.pem", "cp /tmp/3rd-party-ca-cert.pem /etc/pki/ovirt-engine/apache-ca.pem", "cp /etc/pki/ovirt-engine/keys/apache.key.nopass /etc/pki/ovirt-engine/keys/apache.key.nopass.bck cp /etc/pki/ovirt-engine/certs/apache.cer /etc/pki/ovirt-engine/certs/apache.cer.bck", "cp /tmp/apache.key /etc/pki/ovirt-engine/keys/apache.key.nopass", "chown root:ovirt /etc/pki/ovirt-engine/keys/apache.key.nopass chmod 640 /etc/pki/ovirt-engine/keys/apache.key.nopass", "cp /tmp/apache.cer /etc/pki/ovirt-engine/certs/apache.cer", "chown root:ovirt /etc/pki/ovirt-engine/certs/apache.cer chmod 644 /etc/pki/ovirt-engine/certs/apache.cer", "systemctl restart httpd.service", "ENGINE_HTTPS_PKI_TRUST_STORE=\"/etc/pki/java/cacerts\" ENGINE_HTTPS_PKI_TRUST_STORE_PASSWORD=\"\"", "SSL_CERTIFICATE=/etc/pki/ovirt-engine/certs/apache.cer SSL_KEY=/etc/pki/ovirt-engine/keys/apache.key.nopass", "systemctl restart ovirt-websocket-proxy.service", "mkdir -p /etc/ovirt-engine-backup/engine-backup-config.d", "BACKUP_PATHS=\"USD{BACKUP_PATHS} /etc/ovirt-engine-backup\" cp -f /etc/pki/ovirt-engine/apache-ca.pem /etc/pki/ca-trust/source/anchors/ 3rd-party-ca-cert .pem update-ca-trust", "systemctl restart ovirt-provider-ovn.service", "systemctl restart ovirt-imageio.service", "systemctl restart ovirt-engine.service", "hosted-engine --set-maintenance --mode=none" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/appe-Red_Hat_Enterprise_Virtualization_and_SSL
Containerized Ansible Automation Platform installation guide
Containerized Ansible Automation Platform installation guide Red Hat Ansible Automation Platform 2.4 Containerized Ansible Automation Platform Installation Guide Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/containerized_ansible_automation_platform_installation_guide/index
14.16.3. Determining a Compatible CPU Model to Suit a Pool of Host Physical Machines
14.16.3. Determining a Compatible CPU Model to Suit a Pool of Host Physical Machines Now that it is possible to find out what CPU capabilities a single host physical machine has, the next step is to determine what CPU capabilities are best to expose to the guest virtual machine. If it is known that the guest virtual machine will never need to be migrated to another host physical machine, the host physical machine CPU model can be passed straight through unmodified. A virtualized data center may have a set of configurations that can guarantee all servers will have 100% identical CPUs. Again, the host physical machine CPU model can be passed straight through unmodified. The more common case, though, is where there is variation in CPUs between host physical machines. In this mixed CPU environment, the lowest common denominator CPU must be determined. This is not entirely straightforward, so libvirt provides an API for exactly this task. If libvirt is provided a list of XML documents, each describing a CPU model for a host physical machine, libvirt will internally convert these to CPUID masks, calculate their intersection, and convert the CPUID mask result back into an XML CPU description. Here is an example of what libvirt reports as the capabilities on a basic workstation, when the virsh capabilities command is executed: <capabilities> <host> <cpu> <arch>i686</arch> <model>pentium3</model> <topology sockets='1' cores='2' threads='1'/> <feature name='lahf_lm'/> <feature name='lm'/> <feature name='xtpr'/> <feature name='cx16'/> <feature name='ssse3'/> <feature name='tm2'/> <feature name='est'/> <feature name='vmx'/> <feature name='ds_cpl'/> <feature name='monitor'/> <feature name='pni'/> <feature name='pbe'/> <feature name='tm'/> <feature name='ht'/> <feature name='ss'/> <feature name='sse2'/> <feature name='acpi'/> <feature name='ds'/> <feature name='clflush'/> <feature name='apic'/> </cpu> </host> </capabilities> Figure 14.3. Pulling host physical machine's CPU model information Now compare that to a random server, using the same virsh capabilities command: <capabilities> <host> <cpu> <arch>x86_64</arch> <model>phenom</model> <topology sockets='2' cores='4' threads='1'/> <feature name='osvw'/> <feature name='3dnowprefetch'/> <feature name='misalignsse'/> <feature name='sse4a'/> <feature name='abm'/> <feature name='cr8legacy'/> <feature name='extapic'/> <feature name='cmp_legacy'/> <feature name='lahf_lm'/> <feature name='rdtscp'/> <feature name='pdpe1gb'/> <feature name='popcnt'/> <feature name='cx16'/> <feature name='ht'/> <feature name='vme'/> </cpu> ...snip... Figure 14.4. Generate CPU description from a random server To see if this CPU description is compatible with the workstation CPU description, use the virsh cpu-compare command. The reduced content was stored in a file named virsh-caps-workstation-cpu-only.xml and the virsh cpu-compare command can be executed on this file: As seen in this output, libvirt is correctly reporting that the CPUs are not strictly compatible. This is because there are several features in the server CPU that are missing in the client CPU. To be able to migrate between the client and the server, it will be necessary to open the XML file and comment out some features. To determine which features need to be removed, run the virsh cpu-baseline command on the both-cpus.xml file, which contains the CPU information for both machines.
Running # virsh cpu-baseline both-cpus.xml results in: <cpu match='exact'> <model>pentium3</model> <feature policy='require' name='lahf_lm'/> <feature policy='require' name='lm'/> <feature policy='require' name='cx16'/> <feature policy='require' name='monitor'/> <feature policy='require' name='pni'/> <feature policy='require' name='ht'/> <feature policy='require' name='sse2'/> <feature policy='require' name='clflush'/> <feature policy='require' name='apic'/> </cpu> Figure 14.5. Composite CPU baseline This composite file shows which elements are in common. Everything that is not in common should be commented out.
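The section does not show how both-cpus.xml is assembled. A minimal sketch, assuming the xmllint utility is available and that the file names are illustrative; run the first command on the workstation, the second on the server, and then combine the two files on either machine:
# virsh capabilities | xmllint --xpath '/capabilities/host/cpu' - > workstation-cpu.xml
# virsh capabilities | xmllint --xpath '/capabilities/host/cpu' - > server-cpu.xml
# cat workstation-cpu.xml server-cpu.xml > both-cpus.xml
# virsh cpu-baseline both-cpus.xml
virsh cpu-baseline extracts every <cpu> element it finds in the given file, so the concatenated file does not need a single enclosing root element.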
[ "<capabilities> <host> <cpu> <arch>i686</arch> <model>pentium3</model> <topology sockets='1' cores='2' threads='1'/> <feature name='lahf_lm'/> <feature name='lm'/> <feature name='xtpr'/> <feature name='cx16'/> <feature name='ssse3'/> <feature name='tm2'/> <feature name='est'/> <feature name='vmx'/> <feature name='ds_cpl'/> <feature name='monitor'/> <feature name='pni'/> <feature name='pbe'/> <feature name='tm'/> <feature name='ht'/> <feature name='ss'/> <feature name='sse2'/> <feature name='acpi'/> <feature name='ds'/> <feature name='clflush'/> <feature name='apic'/> </cpu> </host> </capabilities>", "<capabilities> <host> <cpu> <arch>x86_64</arch> <model>phenom</model> <topology sockets='2' cores='4' threads='1'/> <feature name='osvw'/> <feature name='3dnowprefetch'/> <feature name='misalignsse'/> <feature name='sse4a'/> <feature name='abm'/> <feature name='cr8legacy'/> <feature name='extapic'/> <feature name='cmp_legacy'/> <feature name='lahf_lm'/> <feature name='rdtscp'/> <feature name='pdpe1gb'/> <feature name='popcnt'/> <feature name='cx16'/> <feature name='ht'/> <feature name='vme'/> </cpu> ...snip", "virsh cpu-compare virsh-caps-workstation-cpu-only.xml Host physical machine CPU is a superset of CPU described in virsh-caps-workstation-cpu-only.xml", "<cpu match='exact'> <model>pentium3</model> <feature policy='require' name='lahf_lm'/> <feature policy='require' name='lm'/> <feature policy='require' name='cx16'/> <feature policy='require' name='monitor'/> <feature policy='require' name='pni'/> <feature policy='require' name='ht'/> <feature policy='require' name='sse2'/> <feature policy='require' name='clflush'/> <feature policy='require' name='apic'/> </cpu>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-guest_virtual_machine_cpu_model_configuration-determining_a_compatible_cpu_model_to_suit_a_pool_of_host_physical_machines
Chapter 1. Vaults in IdM
Chapter 1. Vaults in IdM This chapter describes vaults in Identity Management (IdM). It introduces the following topics: The concept of the vault . The different roles associated with a vault . The different types of vaults available in IdM based on the level of security and access control . The different types of vaults available in IdM based on ownership . The concept of vault containers . The basic commands for managing vaults in IdM . Installing the key recovery authority (KRA), which is a prerequisite for using vaults in IdM . 1.1. Vaults and their benefits A vault is a useful feature for those Identity Management (IdM) users who want to keep all their sensitive data stored securely but conveniently in one place. There are various types of vaults and you should choose which vault to use based on your requirements. A vault is a secure location in (IdM) for storing, retrieving, sharing, and recovering a secret. A secret is security-sensitive data, usually authentication credentials, that only a limited group of people or entities can access. For example, secrets include: Passwords PINs Private SSH keys A vault is comparable to a password manager. Just like a password manager, a vault typically requires a user to generate and remember one primary password to unlock and access any information stored in the vault. However, a user can also decide to have a standard vault. A standard vault does not require the user to enter any password to access the secrets stored in the vault. Note The purpose of vaults in IdM is to store authentication credentials that allow you to authenticate to external, non-IdM-related services. Other important characteristics of the IdM vaults are: Vaults are only accessible to the vault owner and those IdM users that the vault owner selects to be the vault members. In addition, the IdM administrator has access to the vault. If a user does not have sufficient privileges to create a vault, an IdM administrator can create the vault and set the user as its owner. Users and services can access the secrets stored in a vault from any machine enrolled in the IdM domain. One vault can only contain one secret, for example, one file. However, the file itself can contain multiple secrets such as passwords, keytabs or certificates. Note Vault is only available from the IdM command line (CLI), not from the IdM Web UI. 1.2. Vault owners, members, and administrators Identity Management (IdM) distinguishes the following vault user types: Vault owner A vault owner is a user or service with basic management privileges on the vault. For example, a vault owner can modify the properties of the vault or add new vault members. Each vault must have at least one owner. A vault can also have multiple owners. Vault member A vault member is a user or service that can access a vault created by another user or service. Vault administrator Vault administrators have unrestricted access to all vaults and are allowed to perform all vault operations. Note Symmetric and asymmetric vaults are protected with a password or key and apply special access control rules (see Vault types ). The administrator must meet these rules to: Access secrets in symmetric and asymmetric vaults. Change or reset the vault password or key. A vault administrator is any user with the Vault Administrators privilege. In the context of the role-based access control (RBAC) in IdM, a privilege is a group of permissions that you can apply to a role. 
Vault User The vault user represents the user in whose container the vault is located. The Vault user information is displayed in the output of specific commands, such as ipa vault-show : For details on vault containers and user vaults, see Vault containers . Additional resources See Standard, symmetric and asymmetric vaults for details on vault types. 1.3. Standard, symmetric, and asymmetric vaults Based on the level of security and access control, IdM classifies vaults into the following types: Standard vaults Vault owners and vault members can archive and retrieve the secrets without having to use a password or key. Symmetric vaults Secrets in the vault are protected with a symmetric key. Vault owners and members can archive and retrieve the secrets, but they must provide the vault password. Asymmetric vaults Secrets in the vault are protected with an asymmetric key. Users archive the secret using a public key and retrieve it using a private key. Vault members can only archive secrets, while vault owners can do both, archive and retrieve secrets. 1.4. User, service, and shared vaults Based on ownership, IdM classifies vaults into several types. The table below contains information about each type, its owner and use. Table 1.1. IdM vaults based on ownership Type Description Owner Note User vault A private vault for a user A single user Any user can own one or more user vaults if allowed by IdM administrator Service vault A private vault for a service A single service Any service can own one or more user vaults if allowed by IdM administrator Shared vault A vault shared by multiple users and services The vault administrator who created the vault Users and services can own one or more user vaults if allowed by IdM administrator. The vault administrators other than the one that created the vault also have full access to the vault. 1.5. Vault containers A vault container is a collection of vaults. The table below lists the default vault containers that Identity Management (IdM) provides. Table 1.2. Default vault containers in IdM Type Description Purpose User container A private container for a user Stores user vaults for a particular user Service container A private container for a service Stores service vaults for a particular service Shared container A container for multiple users and services Stores vaults that can be shared by multiple users or services IdM creates user and service containers for each user or service automatically when the first private vault for the user or service is created. After the user or service is deleted, IdM removes the container and its contents. 1.6. Basic IdM vault commands You can use the basic commands outlined below to manage Identity Management (IdM) vaults. The table below contains a list of ipa vault-* commands with the explanation of their purpose. Note Before running any ipa vault-* command, install the Key Recovery Authority (KRA) certificate system component on one or more of the servers in your IdM domain. For details, see Installing the Key Recovery Authority in IdM . Table 1.3. Basic IdM vault commands with explanations Command Purpose ipa help vault Displays conceptual information about IdM vaults and sample vault commands. ipa vault-add --help , ipa vault-find --help Adding the --help option to a specific ipa vault-* command displays the options and detailed help available for that command. ipa vault-show user_vault --user idm_user When accessing a vault as a vault member, you must specify the vault owner. 
If you do not specify the vault owner, IdM informs you that it did not find the vault: ipa vault-show shared_vault --shared When accessing a shared vault, you must specify that the vault you want to access is a shared vault. Otherwise, IdM informs you it did not find the vault: 1.7. Installing the Key Recovery Authority in IdM Follow this procedure to enable vaults in Identity Management (IdM) by installing the Key Recovery Authority (KRA) Certificate System (CS) component on a specific IdM server. Prerequisites You are logged in as root on the IdM server. An IdM certificate authority is installed on the IdM server. You have the Directory Manager credentials. Procedure Install the KRA: Important You can install the first KRA of an IdM cluster on a hidden replica. However, installing additional KRAs requires temporarily activating the hidden replica before you install the KRA clone on a non-hidden replica. Then you can hide the originally hidden replica again. Note To make the vault service highly available and resilient, install the KRA on two IdM servers or more. Maintaining multiple KRA servers prevents data loss. Additional resources Demoting or promoting hidden replicas The hidden replica mode
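Once the KRA is installed, a minimal sketch of the vault workflow using the commands from Table 1.3 together with the related archive and retrieve subcommands; the vault name my_vault and the file paths are illustrative only:
$ ipa vault-add my_vault --type standard
$ ipa vault-archive my_vault --in /tmp/secret.txt
$ ipa vault-retrieve my_vault --out /tmp/secret-out.txt
Because the vault is a standard vault, no password or key is prompted for when archiving or retrieving the secret.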
[ "ipa vault-show my_vault Vault name: my_vault Type: standard Owner users: user Vault user: user", "[admin@server ~]$ ipa vault-show user_vault ipa: ERROR: user_vault: vault not found", "[admin@server ~]$ ipa vault-show shared_vault ipa: ERROR: shared_vault: vault not found", "ipa-kra-install" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/working_with_vaults_in_identity_management/vaults-in-idm_working-with-vaults-in-identity-management
Chapter 14. Monitoring using qdstat
Chapter 14. Monitoring using qdstat The qdstat tool is a command-line tool for monitoring the status and performance of AMQ Interconnect router networks. 14.1. Syntax for using qdstat You can use qdstat with the following syntax: This specifies: An option for the type of information to view. One or more optional connection options to specify a router for which to view the information. If you do not specify a connection option, qdstat connects to the router listening on localhost and the default AMQP port (5672). The secure connection options if the router for which you want to view information only accepts secure connections. Additional resources For more information about qdstat , see the qdstat man page . 14.2. Commands for monitoring the router network You can use qdstat to view the status of routers on your router network. For example, you can view information about the attached links and configured addresses, available connections, and nodes in the router network. To... Use this command... Create a state dump containing all statistics for all routers A state dump shows the current operational state of the router network. If you run this command on an interior router, it displays the statistics for all interior routers. If you run the command on an edge router, it displays the statistics for only that edge router. Create a state dump containing a single statistic for all routers If you run this command on an interior router, it displays the statistic for all interior routers. If you run the command on an edge router, it displays the statistic for only that edge router. Create a state dump containing all statistics for a single router This command shows the statistics for the local router only. View general statistics for a router View a list of connections to a router View the AMQP links attached to a router You can view a list of AMQP links attached to the router from clients (sender/receiver), from or to other routers into the network, to other containers (for example, brokers), and from the tool itself. View known routers on the router network View the addresses known to a router View a router's autolinks View the status of a router's link routes View a router's policy global settings and statistics View a router's policy vhost settings View a router's policy vhost statistics View a router's vhostgroup settings View a router's memory consumption Additional resources For more information about the fields displayed by each qdstat command, see the qdstat man page .
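A hedged example of the syntax described above, pointing qdstat at a specific router instead of the default localhost listener; the host name router1.example.com is a placeholder:
$ qdstat -c -b router1.example.com:5672
$ qdstat -a --all-routers -b router1.example.com:5672
The first command lists the connections on the specified router, and the second displays the known addresses across all interior routers reachable from it.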
[ "qdstat <option> [ <connection-options> ] [ <secure-connection-options> ]", "qdstat --all-routers --all-entities", "qdstat -l|-a|-c|--autolinks|--linkroutes|-g|-m --all-routers", "qdstat --all-entities", "qdstat -g [all-routers| <connection-options> ]", "qdstat -c [all-routers| <connection-options> ]", "qdstat -l [all-routers| <connection-options> ]", "qdstat -n [all-routers| <connection-options> ]", "qdstat -a [all-routers| <connection-options> ]", "qdstat --autolinks [all-routers| <connection-options> ]", "qdstat --linkroutes [all-routers| <connection-options> ]", "qdstat --policy [all-routers| <connection-options> ]", "qdstat --vhosts [all-routers| <connection-options> ]", "qdstat --vhoststats [all-routers| <connection-options> ]", "qdstat --vhostgroups [all-routers| <connection-options> ]", "qdstat -m [all-routers| <connection-options> ]" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_amq_interconnect/monitoring-using-qdstat-router-rhel
Service Mesh
Service Mesh Red Hat OpenShift Service on AWS 4 Service Mesh installation, usage, and release notes Red Hat OpenShift Documentation Team
[ "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: runtime: components: pilot: container: env: ENABLE_NATIVE_SIDECARS: \"true\"", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: runtime: components: pilot: container: env: PILOT_ENABLE_GATEWAY_API: \"false\"", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: gateways: openshiftRoute: enabled: true", "spec: meshConfig discoverySelectors: - matchLabels: env: prod region: us-east1 - matchExpressions: - key: app operator: In values: - cassandra - spark", "spec: meshConfig: extensionProviders: - name: prometheus prometheus: {} --- apiVersion: telemetry.istio.io/v1alpha1 kind: Telemetry metadata: name: enable-prometheus-metrics spec: metrics: - providers: - name: prometheus", "spec: techPreview: gatewayAPI: enabled: true", "spec: runtime: components: pilot: container: env: PILOT_ENABLE_GATEWAY_API: \"true\" PILOT_ENABLE_GATEWAY_API_STATUS: \"true\" PILOT_ENABLE_GATEWAY_API_DEPLOYMENT_CONTROLLER: \"true\"", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: cluster-wide namespace: istio-system spec: version: v2.3 techPreview: controlPlaneMode: ClusterScoped 1", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - '*' 1", "kubectl get crd gateways.gateway.networking.k8s.io || { kubectl kustomize \"github.com/kubernetes-sigs/gateway-api/config/crd?ref=v0.4.0\" | kubectl apply -f -; }", "spec: runtime: components: pilot: container: env: PILOT_ENABLE_GATEWAY_API: \"true\" PILOT_ENABLE_GATEWAY_API_STATUS: \"true\" # and optionally, for the deployment controller PILOT_ENABLE_GATEWAY_API_DEPLOYMENT_CONTROLLER: \"true\"", "apiVersion: gateway.networking.k8s.io/v1alpha2 kind: Gateway metadata: name: gateway spec: addresses: - value: ingress.istio-gateways.svc.cluster.local type: Hostname", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: security: trust: manageNetworkPolicy: false", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: techPreview: meshConfig: defaultConfig: proxyMetadata: HTTP_STRIP_FRAGMENT_FROM_PATH_UNSAFE_IF_DISABLED: \"false\"", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: foo spec: action: DENY rules: - from: - source: namespaces: [\"dev\"] to: - operation: hosts: [\"httpbin.com\",\"httpbin.com:*\"]", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: default spec: action: DENY rules: - to: - operation: hosts: [\"httpbin.example.com:*\"]", "spec: techPreview: global: pathNormalization: <option>", "oc create -f <myEnvoyFilterFile>", "apiVersion: networking.istio.io/v1alpha3 kind: EnvoyFilter metadata: name: ingress-case-insensitive namespace: istio-system spec: configPatches: - applyTo: HTTP_FILTER match: context: GATEWAY listener: filterChain: filter: name: \"envoy.filters.network.http_connection_manager\" subFilter: name: \"envoy.filters.http.router\" patch: operation: INSERT_BEFORE value: name: envoy.lua typed_config: \"@type\": \"type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua\" inlineCode: | function envoy_on_request(request_handle) local path = request_handle:headers():get(\":path\") request_handle:headers():replace(\":path\", string.lower(path)) end", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: mode: ClusterWide meshConfig: discoverySelectors: - 
matchLabels: istio-discovery: enabled gateways: ingress: enabled: true", "label namespace istio-system istio-discovery=enabled", "2023-05-02T15:20:42.541034Z error watch error in cluster Kubernetes: failed to list *v1alpha2.TLSRoute: the server could not find the requested resource (get tlsroutes.gateway.networking.k8s.io) 2023-05-02T15:20:42.616450Z info kube controller \"gateway.networking.k8s.io/v1alpha2/TCPRoute\" is syncing", "kubectl get crd gateways.gateway.networking.k8s.io || { kubectl kustomize \"github.com/kubernetes-sigs/gateway-api/config/crd/experimental?ref=v0.5.1\" | kubectl apply -f -; }", "apiVersion: networking.istio.io/v1beta1 kind: ProxyConfig metadata: name: mesh-wide-concurrency namespace: <istiod-namespace> spec: concurrency: 0", "api: namespaces: exclude: - \"^istio-operator\" - \"^kube-.*\" - \"^openshift.*\" - \"^kiali-operator\"", "spec: proxy: networking: trafficControl: inbound: excludedPorts: - 15020", "spec: runtime: components: pilot: container: env: APPLY_WASM_PLUGINS_TO_INBOUND_ONLY: \"true\"", "error Installer exits with open /host/etc/cni/multus/net.d/v2-2-istio-cni.kubeconfig.tmp.841118073: no such file or directory", "oc label namespace istio-system maistra.io/ignore-namespace-", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: mode: ClusterWide meshConfig: discoverySelectors: - matchLabels: istio-discovery: enabled 1 - matchExpressions: - key: kubernetes.io/metadata.name 2 operator: In values: - bookinfo - httpbin - istio-system", "oc -n istio-system edit smcp <name> 1", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: mode: ClusterWide meshConfig: discoverySelectors: - matchLabels: istio-discovery: enabled 1 - matchExpressions: - key: kubernetes.io/metadata.name 2 operator: In values: - bookinfo - httpbin - istio-system", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: memberSelectors: - matchLabels: istio-injection: enabled 1", "oc edit smmr -n <controlplane-namespace>", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: memberSelectors: - matchLabels: istio-injection: enabled 1", "apiVersion: apps/v1 kind: Deployment metadata: name: nginx spec: selector: matchLabels: app: nginx template: metadata: annotations: sidecar.istio.io/inject: 'true' 1 labels: app: nginx spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 --- apiVersion: apps/v1 kind: Deployment metadata: name: nginx-without-sidecar spec: selector: matchLabels: app: nginx-without-sidecar template: metadata: labels: app: nginx-without-sidecar 2 spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80", "oc edit deployment -n <namespace> <deploymentName>", "apiVersion: apps/v1 kind: Deployment metadata: name: nginx spec: selector: matchLabels: app: nginx template: metadata: annotations: sidecar.istio.io/inject: 'true' 1 labels: app: nginx spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 --- apiVersion: apps/v1 kind: Deployment metadata: name: nginx-without-sidecar spec: selector: matchLabels: app: nginx-without-sidecar template: metadata: labels: app: nginx-without-sidecar 2 spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin-usernamepolicy spec: action: ALLOW rules: - when: - key: 'request.regex.headers[username]' values: - \"allowed.*\" selector: 
matchLabels: app: httpbin", "oc new-project istio-system", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: version: v2.6 security: identity: type: ThirdParty 1 tracing: type: None sampling: 10000 policy: type: Istiod addons: grafana: enabled: true kiali: enabled: true prometheus: enabled: true telemetry: type: Istiod", "oc create -n istio-system -f <istio_installation.yaml>", "oc get pods -n istio-system -w", "NAME READY STATUS RESTARTS AGE grafana-b4d59bd7-mrgbr 2/2 Running 0 65m istio-egressgateway-678dc97b4c-wrjkp 1/1 Running 0 108s istio-ingressgateway-b45c9d54d-4qg6n 1/1 Running 0 108s istiod-basic-55d78bbbcd-j5556 1/1 Running 0 108s kiali-6476c7656c-x5msp 1/1 Running 0 43m prometheus-58954b8d6b-m5std 2/2 Running 0 66m", "oc get smcp -n istio-system", "NAME READY STATUS PROFILES VERSION AGE basic 10/10 ComponentsReady [\"default\"] 2.6.6 66m", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: version: v2.6 mode: ClusterWide 1 security: identity: type: ThirdParty 2 tracing: type: Jaeger sampling: 10000 policy: type: Istiod addons: grafana: enabled: true jaeger: install: storage: type: Memory kiali: enabled: true prometheus: enabled: true telemetry: type: Istiod", "oc new-project istio-system", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: version: v2.6 mode: ClusterWide 1 security: identity: type: ThirdParty 2", "oc create -n istio-system -f <istio_installation.yaml>", "oc get pods -n istio-system -w", "NAME READY STATUS RESTARTS AGE grafana-b4d59bd7-mrgbr 2/2 Running 0 65m istio-egressgateway-678dc97b4c-wrjkp 1/1 Running 0 108s istio-ingressgateway-b45c9d54d-4qg6n 1/1 Running 0 108s istiod-basic-55d78bbbcd-j5556 1/1 Running 0 108s jaeger-67c75bd6dc-jv6k6 2/2 Running 0 65m kiali-6476c7656c-x5msp 1/1 Running 0 43m prometheus-58954b8d6b-m5std 2/2 Running 0 66m", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "oc new-project <your-project>", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name", "oc create -n istio-system -f servicemeshmemberroll-default.yaml", "oc get smmr -n istio-system default", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system #control plane project spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name", "oc edit smmr -n <controlplane-namespace>", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system #control plane project spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name", "apiVersion: maistra.io/v1 kind: ServiceMeshMember metadata: name: default namespace: my-application spec: controlPlaneRef: namespace: istio-system name: basic", "oc apply -f <file-name>", "oc get smm default -n my-application", "NAME CONTROL PLANE READY AGE default istio-system/basic True 2m11s", "oc describe smmr default -n istio-system", "Name: default Namespace: istio-system Labels: <none> Status: Configured Members: default my-application Members: default my-application", "oc edit smmr default -n istio-system", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: memberSelectors: 
1 - matchLabels: 2 mykey: myvalue 3 - matchLabels: 4 myotherkey: myothervalue 5", "oc new-project bookinfo", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - bookinfo", "oc create -n istio-system -f servicemeshmemberroll-default.yaml", "oc get smmr -n istio-system -o wide", "NAME READY STATUS AGE MEMBERS default 1/1 Configured 70s [\"bookinfo\"]", "oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/platform/kube/bookinfo.yaml", "service/details created serviceaccount/bookinfo-details created deployment.apps/details-v1 created service/ratings created serviceaccount/bookinfo-ratings created deployment.apps/ratings-v1 created service/reviews created serviceaccount/bookinfo-reviews created deployment.apps/reviews-v1 created deployment.apps/reviews-v2 created deployment.apps/reviews-v3 created service/productpage created serviceaccount/bookinfo-productpage created deployment.apps/productpage-v1 created", "oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/bookinfo-gateway.yaml", "gateway.networking.istio.io/bookinfo-gateway created virtualservice.networking.istio.io/bookinfo created", "export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')", "oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/destination-rule-all.yaml", "oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/destination-rule-all-mtls.yaml", "destinationrule.networking.istio.io/productpage created destinationrule.networking.istio.io/reviews created destinationrule.networking.istio.io/ratings created destinationrule.networking.istio.io/details created", "oc get pods -n bookinfo", "NAME READY STATUS RESTARTS AGE details-v1-55b869668-jh7hb 2/2 Running 0 12m productpage-v1-6fc77ff794-nsl8r 2/2 Running 0 12m ratings-v1-7d7d8d8b56-55scn 2/2 Running 0 12m reviews-v1-868597db96-bdxgq 2/2 Running 0 12m reviews-v2-5b64f47978-cvssp 2/2 Running 0 12m reviews-v3-6dfd49b55b-vcwpf 2/2 Running 0 12m", "echo \"http://USDGATEWAY_URL/productpage\"", "oc delete project bookinfo", "oc -n istio-system patch --type='json' smmr default -p '[{\"op\": \"remove\", \"path\": \"/spec/members\", \"value\":[\"'\"bookinfo\"'\"]}]'", "oc get deployment -n <namespace>", "get deployment -n bookinfo ratings-v1 -o yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: ratings-v1 namespace: bookinfo labels: app: ratings version: v1 spec: template: metadata: labels: sidecar.istio.io/inject: 'true'", "oc apply -n <namespace> -f deployment.yaml", "oc apply -n bookinfo -f deployment-ratings-v1.yaml", "oc get deployment -n <namespace> <deploymentName> -o yaml", "oc get deployment -n bookinfo ratings-v1 -o yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: resource spec: replicas: 7 selector: matchLabels: app: resource template: metadata: annotations: sidecar.maistra.io/proxyEnv: \"{ \\\"maistra_test_env\\\": \\\"env_value\\\", \\\"maistra_test_env_2\\\": \\\"env_value_2\\\" }\"", "oc patch deployment/<deployment> -p '{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/restartedAt\": \"'`date -Iseconds`'\"}}}}}'", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: gateways: openshiftRoute: enabled: true", "An error occurred admission webhook smcp.validation.maistra.io denied the request: [support for 
policy.type \"Mixer\" and policy.Mixer options have been removed in v2.1, please use another alternative, support for telemetry.type \"Mixer\" and telemetry.Mixer options have been removed in v2.1, please use another alternative]\"", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: policy: type: Istiod telemetry: type: Istiod version: v2.6", "oc project istio-system", "oc get smcp -o yaml", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6", "oc get smcp -o yaml", "oc get smcp.v1.maistra.io <smcp_name> > smcp-resource.yaml #Edit the smcp-resource.yaml file. oc replace -f smcp-resource.yaml", "oc patch smcp.v1.maistra.io <smcp_name> --type json --patch '[{\"op\": \"replace\",\"path\":\"/spec/path/to/bad/setting\",\"value\":\"corrected-value\"}]'", "oc edit smcp.v1.maistra.io <smcp_name>", "oc project istio-system", "oc get servicemeshcontrolplanes.v1.maistra.io <smcp_name> -o yaml > <smcp_name>.v1.yaml", "oc get smcp <smcp_name> -o yaml > <smcp_name>.v2.yaml", "oc new-project istio-system-upgrade", "oc create -n istio-system-upgrade -f <smcp_name>.v2.yaml", "spec: policy: type: Mixer", "spec: telemetry: type: Mixer", "apiVersion: authentication.istio.io/v1alpha1 kind: Policy metadata: name: productpage-mTLS-disable namespace: <namespace> spec: targets: - name: productpage", "apiVersion: security.istio.io/v1beta1 kind: PeerAuthentication metadata: name: productpage-mTLS-disable namespace: <namespace> spec: mtls: mode: DISABLE selector: matchLabels: # this should match the selector for the \"productpage\" service app: productpage", "apiVersion: authentication.istio.io/v1alpha1 kind: Policy metadata: name: productpage-mTLS-with-JWT namespace: <namespace> spec: targets: - name: productpage ports: - number: 9000 peers: - mtls: origins: - jwt: issuer: \"https://securetoken.google.com\" audiences: - \"productpage\" jwksUri: \"https://www.googleapis.com/oauth2/v1/certs\" jwtHeaders: - \"x-goog-iap-jwt-assertion\" triggerRules: - excludedPaths: - exact: /health_check principalBinding: USE_ORIGIN", "#require mtls for productpage:9000 apiVersion: security.istio.io/v1beta1 kind: PeerAuthentication metadata: name: productpage-mTLS-with-JWT namespace: <namespace> spec: selector: matchLabels: # this should match the selector for the \"productpage\" service app: productpage portLevelMtls: 9000: mode: STRICT --- #JWT authentication for productpage apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: productpage-mTLS-with-JWT namespace: <namespace> spec: selector: matchLabels: # this should match the selector for the \"productpage\" service app: productpage jwtRules: - issuer: \"https://securetoken.google.com\" audiences: - \"productpage\" jwksUri: \"https://www.googleapis.com/oauth2/v1/certs\" fromHeaders: - name: \"x-goog-iap-jwt-assertion\" --- #Require JWT token to access product page service from #any client to all paths except /health_check apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: productpage-mTLS-with-JWT namespace: <namespace> spec: action: ALLOW selector: matchLabels: # this should match the selector for the \"productpage\" service app: productpage rules: - to: # require JWT token to access all other paths - operation: notPaths: - /health_check from: - source: # if using principalBinding: USE_PEER in the Policy, # then use principals, e.g. 
# principals: # - \"*\" requestPrincipals: - \"*\" - to: # no JWT token required to access health_check - operation: paths: - /health_check", "spec: tracing: sampling: 100 # 1% type: Jaeger", "spec: addons: jaeger: name: jaeger install: storage: type: Memory # or Elasticsearch for production mode memory: maxTraces: 100000 elasticsearch: # the following values only apply if storage:type:=Elasticsearch storage: # specific storageclass configuration for the Jaeger Elasticsearch (optional) size: \"100G\" storageClassName: \"storageclass\" nodeCount: 3 redundancyPolicy: SingleRedundancy runtime: components: tracing.jaeger: {} # general Jaeger specific runtime configuration (optional) tracing.jaeger.elasticsearch: #runtime configuration for Jaeger Elasticsearch deployment (optional) container: resources: requests: memory: \"1Gi\" cpu: \"500m\" limits: memory: \"1Gi\"", "spec: addons: grafana: enabled: true install: {} # customize install kiali: enabled: true name: kiali install: {} # customize install", "oc rollout restart <deployment>", "oc policy add-role-to-user -n istio-system --role-namespace istio-system mesh-user <user_name>", "apiVersion: maistra.io/v1 kind: ServiceMeshMember metadata: name: default spec: controlPlaneRef: namespace: istio-system name: basic", "oc policy add-role-to-user", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: namespace: istio-system name: mesh-users roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: mesh-user subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: alice", "oc create configmap --from-file=<profiles-directory> smcp-templates -n openshift-operators", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: profiles: - default", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: version: v2.6 security: dataPlane: mtls: true", "apiVersion: security.istio.io/v1beta1 kind: PeerAuthentication metadata: name: default namespace: <namespace> spec: mtls: mode: STRICT", "oc create -n <namespace> -f <policy.yaml>", "apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: default namespace: <namespace> spec: host: \"*.<namespace>.svc.cluster.local\" trafficPolicy: tls: mode: ISTIO_MUTUAL", "oc create -n <namespace> -f <destination-rule.yaml>", "kind: ServiceMeshControlPlane spec: security: controlPlane: tls: minProtocolVersion: TLSv1_2", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: ingress-policy namespace: istio-system spec: selector: matchLabels: app: istio-ingressgateway action: DENY rules: - from: - source: ipBlocks: [\"1.2.3.4\"]", "oc create -n istio-system -f <filename>", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin-deny namespace: bookinfo spec: selector: matchLabels: app: httpbin version: v1 action: DENY rules: - from: - source: notNamespaces: [\"bookinfo\"]", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: allow-all namespace: bookinfo spec: action: ALLOW rules: - {}", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: deny-all namespace: bookinfo spec: {}", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: ingress-policy namespace: istio-system spec: selector: matchLabels: app: istio-ingressgateway action: ALLOW rules: - from: - source: ipBlocks: [\"1.2.3.4\", \"5.6.7.0/24\"]", "apiVersion: \"security.istio.io/v1beta1\" kind: \"RequestAuthentication\" metadata: name: 
\"jwt-example\" namespace: bookinfo spec: selector: matchLabels: app: httpbin jwtRules: - issuer: \"http://localhost:8080/auth/realms/master\" jwksUri: \"http://keycloak.default.svc:8080/auth/realms/master/protocol/openid-connect/certs\"", "apiVersion: \"security.istio.io/v1beta1\" kind: \"AuthorizationPolicy\" metadata: name: \"frontend-ingress\" namespace: bookinfo spec: selector: matchLabels: app: httpbin action: DENY rules: - from: - source: notRequestPrincipals: [\"*\"]", "oc edit smcp <smcp-name>", "spec: security: dataPlane: mtls: true # enable mtls for data plane # JWKSResolver extra CA # PEM-encoded certificate content to trust an additional CA jwksResolverCA: | -----BEGIN CERTIFICATE----- [...] [...] -----END CERTIFICATE-----", "kind: ConfigMap apiVersion: v1 data: extra.pem: | -----BEGIN CERTIFICATE----- [...] [...] -----END CERTIFICATE-----", "oc create secret generic cacerts -n istio-system --from-file=<path>/ca-cert.pem --from-file=<path>/ca-key.pem --from-file=<path>/root-cert.pem --from-file=<path>/cert-chain.pem", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: security: dataPlane: mtls: true certificateAuthority: type: Istiod istiod: type: PrivateKey privateKey: rootCADir: /etc/cacerts", "oc -n istio-system delete pods -l 'app in (istiod,istio-ingressgateway, istio-egressgateway)'", "oc -n bookinfo delete pods --all", "pod \"details-v1-6cd699df8c-j54nh\" deleted pod \"productpage-v1-5ddcb4b84f-mtmf2\" deleted pod \"ratings-v1-bdbcc68bc-kmng4\" deleted pod \"reviews-v1-754ddd7b6f-lqhsv\" deleted pod \"reviews-v2-675679877f-q67r2\" deleted pod \"reviews-v3-79d7549c7-c2gjs\" deleted", "oc get pods -n bookinfo", "sleep 60 oc -n bookinfo exec \"USD(oc -n bookinfo get pod -l app=productpage -o jsonpath={.items..metadata.name})\" -c istio-proxy -- openssl s_client -showcerts -connect details:9080 > bookinfo-proxy-cert.txt sed -n '/-----BEGIN CERTIFICATE-----/{:start /-----END CERTIFICATE-----/!{N;b start};/.*/p}' bookinfo-proxy-cert.txt > certs.pem awk 'BEGIN {counter=0;} /BEGIN CERT/{counter++} { print > \"proxy-cert-\" counter \".pem\"}' < certs.pem", "openssl x509 -in <path>/root-cert.pem -text -noout > /tmp/root-cert.crt.txt", "openssl x509 -in ./proxy-cert-3.pem -text -noout > /tmp/pod-root-cert.crt.txt", "diff -s /tmp/root-cert.crt.txt /tmp/pod-root-cert.crt.txt", "openssl x509 -in <path>/ca-cert.pem -text -noout > /tmp/ca-cert.crt.txt", "openssl x509 -in ./proxy-cert-2.pem -text -noout > /tmp/pod-cert-chain-ca.crt.txt", "diff -s /tmp/ca-cert.crt.txt /tmp/pod-cert-chain-ca.crt.txt", "openssl verify -CAfile <(cat <path>/ca-cert.pem <path>/root-cert.pem) ./proxy-cert-1.pem", "oc delete secret cacerts -n istio-system", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: security: dataPlane: mtls: true", "apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: selfsigned-root-issuer namespace: cert-manager spec: selfSigned: {} --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: root-ca namespace: cert-manager spec: isCA: true duration: 21600h # 900d secretName: root-ca commonName: root-ca.my-company.net subject: organizations: - my-company.net issuerRef: name: selfsigned-root-issuer kind: Issuer group: cert-manager.io --- apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: root-ca spec: ca: secretName: root-ca", "oc apply -f cluster-issuer.yaml", "apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: istio-ca namespace: istio-system spec: isCA: true duration: 21600h secretName: istio-ca commonName: 
istio-ca.my-company.net subject: organizations: - my-company.net issuerRef: name: root-ca kind: ClusterIssuer group: cert-manager.io --- apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: istio-ca namespace: istio-system spec: ca: secretName: istio-ca", "oc apply -n istio-system -f istio-ca.yaml", "helm install istio-csr jetstack/cert-manager-istio-csr -n istio-system -f deploy/examples/cert-manager/istio-csr/istio-csr.yaml", "replicaCount: 2 image: repository: quay.io/jetstack/cert-manager-istio-csr tag: v0.6.0 pullSecretName: \"\" app: certmanager: namespace: istio-system issuer: group: cert-manager.io kind: Issuer name: istio-ca controller: configmapNamespaceSelector: \"maistra.io/member-of=istio-system\" leaderElectionNamespace: istio-system istio: namespace: istio-system revisions: [\"basic\"] server: maxCertificateDuration: 5m tls: certificateDNSNames: # This DNS name must be set in the SMCP spec.security.certificateAuthority.cert-manager.address - cert-manager-istio-csr.istio-system.svc", "oc apply -f mesh.yaml -n istio-system", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: addons: grafana: enabled: false kiali: enabled: false prometheus: enabled: false proxy: accessLogging: file: name: /dev/stdout security: certificateAuthority: cert-manager: address: cert-manager-istio-csr.istio-system.svc:443 type: cert-manager dataPlane: mtls: true identity: type: ThirdParty tracing: type: None --- apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - httpbin - sleep", "oc new-project <namespace>", "oc apply -f https://raw.githubusercontent.com/maistra/istio/maistra-2.4/samples/httpbin/httpbin.yaml", "oc apply -f https://raw.githubusercontent.com/maistra/istio/maistra-2.4/samples/sleep/sleep.yaml", "oc exec \"USD(oc get pod -l app=sleep -n <namespace> -o jsonpath={.items..metadata.name})\" -c sleep -n <namespace> -- curl http://httpbin.<namespace>:8000/ip -s -o /dev/null -w \"%{http_code}\\n\"", "200", "oc apply -n <namespace> -f https://raw.githubusercontent.com/maistra/istio/maistra-2.4/samples/httpbin/httpbin-gateway.yaml", "INGRESS_HOST=USD(oc -n istio-system get routes istio-ingressgateway -o jsonpath='{.spec.host}')", "curl -s -I http://USDINGRESS_HOST/headers -o /dev/null -w \"%{http_code}\" -s", "apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: ext-host-gwy spec: selector: istio: ingressgateway # use istio default controller servers: - port: number: 443 name: https protocol: HTTPS hosts: - ext-host.example.com tls: mode: SIMPLE serverCertificate: /tmp/tls.crt privateKey: /tmp/tls.key", "apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: virtual-svc spec: hosts: - ext-host.example.com gateways: - ext-host-gwy", "oc get svc istio-ingressgateway -n istio-system", "export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')", "export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].port}')", "export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"https\")].port}')", "export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"tcp\")].port}')", "export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')", "export 
INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].nodePort}')", "export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"https\")].nodePort}')", "export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"tcp\")].nodePort}')", "apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: bookinfo-gateway spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - \"*\"", "oc apply -f gateway.yaml", "apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo spec: hosts: - \"*\" gateways: - bookinfo-gateway http: - match: - uri: exact: /productpage - uri: prefix: /static - uri: exact: /login - uri: exact: /logout - uri: prefix: /api/v1/products route: - destination: host: productpage port: number: 9080", "oc apply -f vs.yaml", "export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')", "export TARGET_PORT=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.port.targetPort}')", "curl -s -I \"USDGATEWAY_URL/productpage\"", "apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: gateway1 spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - www.bookinfo.com - bookinfo.example.com", "oc -n istio-system get routes", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD gateway1-lvlfn bookinfo.example.com istio-ingressgateway <all> None gateway1-scqhv www.bookinfo.com istio-ingressgateway <all> None", "apiVersion: maistra.io/v1alpha1 kind: ServiceMeshControlPlane metadata: namespace: istio-system spec: gateways: openshiftRoute: enabled: false", "apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: svc-entry spec: hosts: - ext-svc.example.com ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS", "apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: ext-res-dr spec: host: ext-svc.example.com trafficPolicy: tls: mode: MUTUAL clientCertificate: /etc/certs/myclientcert.pem privateKey: /etc/certs/client_private_key.pem caCertificates: /etc/certs/rootcacerts.pem", "apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: reviews spec: hosts: - reviews http: - match: - headers: end-user: exact: jason route: - destination: host: reviews subset: v2 - route: - destination: host: reviews subset: v3", "oc apply -f <VirtualService.yaml>", "spec: hosts:", "spec: http: - match:", "spec: http: - match: - destination:", "apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: my-destination-rule spec: host: my-svc trafficPolicy: loadBalancer: simple: RANDOM subsets: - name: v1 labels: version: v1 - name: v2 labels: version: v2 trafficPolicy: loadBalancer: simple: ROUND_ROBIN - name: v3 labels: version: v3", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane spec: security: manageNetworkPolicy: false", "apiVersion: networking.istio.io/v1alpha3 kind: Sidecar metadata: name: default namespace: bookinfo spec: egress: - hosts: - \"./*\" - \"istio-system/*\"", "oc apply -f sidecar.yaml", "oc get sidecar", "oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/virtual-service-all-v1.yaml", "oc get virtualservices -o yaml", "export 
GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')", "echo \"http://USDGATEWAY_URL/productpage\"", "oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml", "oc get virtualservice reviews -o yaml", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "oc project istio-system", "oc get routes", "NAME HOST/PORT SERVICES PORT TERMINATION bookinfo-gateway bookinfo-gateway-yourcompany.com istio-ingressgateway http2 grafana grafana-yourcompany.com grafana <all> reencrypt/Redirect istio-ingressgateway istio-ingress-yourcompany.com istio-ingressgateway 8080 jaeger jaeger-yourcompany.com jaeger-query <all> reencrypt kiali kiali-yourcompany.com kiali 20001 reencrypt/Redirect prometheus prometheus-yourcompany.com prometheus <all> reencrypt/Redirect", "curl \"http://USDGATEWAY_URL/productpage\"", "apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: otel namespace: bookinfo 1 spec: mode: deployment config: | receivers: otlp: protocols: grpc: endpoint: 0.0.0.0:4317 exporters: otlp: endpoint: \"tempo-sample-distributor.tracing-system.svc.cluster.local:4317\" 2 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [] exporters: [otlp]", "oc logs -n bookinfo -l app.kubernetes.io/name=otel-collector", "kind: ServiceMeshControlPlane apiVersion: maistra.io/v2 metadata: name: basic namespace: istio-system spec: addons: grafana: enabled: false kiali: enabled: true prometheus: enabled: true meshConfig: extensionProviders: - name: otel opentelemetry: port: 4317 service: otel-collector.bookinfo.svc.cluster.local policy: type: Istiod telemetry: type: Istiod version: v2.6", "spec: tracing: type: None", "apiVersion: telemetry.istio.io/v1alpha1 kind: Telemetry metadata: name: mesh-default namespace: istio-system spec: tracing: - providers: - name: otel randomSamplingPercentage: 100", "apiVersion: kiali.io/v1alpha1 kind: Kiali spec: external_services: tracing: query_timeout: 30 1 enabled: true in_cluster_url: 'http://tempo-sample-query-frontend.tracing-system.svc.cluster.local:16685' url: '[Tempo query frontend Route url]' use_grpc: true 2", "apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: otel-disable-tls spec: host: \"otel-collector.bookinfo.svc.cluster.local\" trafficPolicy: tls: mode: DISABLE", "apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: tempo namespace: tracing-system-mtls spec: host: \"*.tracing-system-mtls.svc.cluster.local\" trafficPolicy: tls: mode: DISABLE", "apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: kiali namespace: istio-system spec: host: kiali.istio-system.svc.cluster.local trafficPolicy: tls: mode: DISABLE", "spec: addons: jaeger: name: distr-tracing-production", "spec: tracing: sampling: 100", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "oc get route -n istio-system jaeger -o jsonpath='{.spec.host}'", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: kiali-monitoring-rbac roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-monitoring-view subjects: - kind: ServiceAccount name: kiali-service-account namespace: istio-system", "apiVersion: kiali.io/v1alpha1 kind: Kiali metadata: name: kiali namespace: istio-system spec: auth: strategy: openshift deployment: accessible_namespaces: #restricted setting for ROSA - istio-system image_pull_policy: '' 
ingress_enabled: true namespace: istio-system", "apiVersion: kiali.io/v1alpha1 kind: Kiali metadata: name: kiali-user-workload-monitoring namespace: istio-system spec: external_services: istio: config_map_name: istio-<smcp-name> istio_sidecar_injector_config_map_name: istio-sidecar-injector-<smcp-name> istiod_deployment_name: istiod-<smcp-name> url_service_version: 'http://istiod-<smcp-name>.istio-system:15014/version' prometheus: auth: token: secret:thanos-querier-web-token:token type: bearer use_kiali_token: false query_scope: mesh_id: \"basic-istio-system\" thanos_proxy: enabled: true url: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 version: v1.65", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: addons: prometheus: enabled: false 1 grafana: enabled: false 2 kiali: name: kiali-user-workload-monitoring meshConfig: extensionProviders: - name: prometheus prometheus: {}", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: user-workload-access namespace: istio-system 1 spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: monitoring podSelector: {} policyTypes: - Ingress", "apiVersion: telemetry.istio.io/v1alpha1 kind: Telemetry metadata: name: enable-prometheus-metrics namespace: istio-system 1 spec: selector: 2 matchLabels: app: bookinfo metrics: - providers: - name: prometheus", "apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: istiod-monitor namespace: istio-system 1 spec: targetLabels: - app selector: matchLabels: istio: pilot endpoints: - port: http-monitoring interval: 30s relabelings: - action: replace replacement: \"basic-istio-system\" 2 targetLabel: mesh_id", "apiVersion: monitoring.coreos.com/v1 kind: PodMonitor metadata: name: istio-proxies-monitor namespace: istio-system 1 spec: selector: matchExpressions: - key: istio-prometheus-ignore operator: DoesNotExist podMetricsEndpoints: - path: /stats/prometheus interval: 30s relabelings: - action: keep sourceLabels: [__meta_kubernetes_pod_container_name] regex: \"istio-proxy\" - action: keep sourceLabels: [__meta_kubernetes_pod_annotationpresent_prometheus_io_scrape] - action: replace regex: (\\d+);(([A-Fa-f0-9]{1,4}::?){1,7}[A-Fa-f0-9]{1,4}) replacement: '[USD2]:USD1' sourceLabels: [__meta_kubernetes_pod_annotation_prometheus_io_port, __meta_kubernetes_pod_ip] targetLabel: __address__ - action: replace regex: (\\d+);((([0-9]+?)(\\.|USD)){4}) replacement: USD2:USD1 sourceLabels: [__meta_kubernetes_pod_annotation_prometheus_io_port, __meta_kubernetes_pod_ip] targetLabel: __address__ - action: labeldrop regex: \"__meta_kubernetes_pod_label_(.+)\" - sourceLabels: [__meta_kubernetes_namespace] action: replace targetLabel: namespace - sourceLabels: [__meta_kubernetes_pod_name] action: replace targetLabel: pod_name - action: replace replacement: \"basic-istio-system\" 2 targetLabel: mesh_id", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: version: v2.6 proxy: runtime: container: resources: requests: cpu: 600m memory: 50Mi limits: {} runtime: components: pilot: container: resources: requests: cpu: 1000m memory: 1.6Gi limits: {} kiali: container: resources: limits: cpu: \"90m\" memory: \"245Mi\" requests: cpu: \"30m\" memory: \"108Mi\" global.oauthproxy: container: resources: requests: cpu: \"101m\" memory: \"256Mi\" limits: cpu: \"201m\" memory: \"512Mi\"", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane 
metadata: name: basic spec: version: v2.6 tracing: sampling: 100 type: Jaeger addons: jaeger: name: MyJaeger install: storage: type: Elasticsearch ingress: enabled: true runtime: components: tracing.jaeger.elasticsearch: # only supports resources and image name container: resources: {}", "oc get smcp basic -o yaml", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: red-mesh namespace: red-mesh-system spec: version: v2.6 runtime: defaults: container: imagePullPolicy: Always gateways: additionalEgress: egress-green-mesh: enabled: true requestedNetworkView: - green-network routerMode: sni-dnat service: metadata: labels: federation.maistra.io/egress-for: egress-green-mesh ports: - port: 15443 name: tls - port: 8188 name: http-discovery #note HTTP here egress-blue-mesh: enabled: true requestedNetworkView: - blue-network routerMode: sni-dnat service: metadata: labels: federation.maistra.io/egress-for: egress-blue-mesh ports: - port: 15443 name: tls - port: 8188 name: http-discovery #note HTTP here additionalIngress: ingress-green-mesh: enabled: true routerMode: sni-dnat service: type: LoadBalancer metadata: labels: federation.maistra.io/ingress-for: ingress-green-mesh ports: - port: 15443 name: tls - port: 8188 name: https-discovery #note HTTPS here ingress-blue-mesh: enabled: true routerMode: sni-dnat service: type: LoadBalancer metadata: labels: federation.maistra.io/ingress-for: ingress-blue-mesh ports: - port: 15443 name: tls - port: 8188 name: https-discovery #note HTTPS here security: identity: type: ThirdParty trust: domain: red-mesh.local", "spec: cluster: name:", "spec: cluster: network:", "spec: gateways: additionalEgress: <egress_name>:", "spec: gateways: additionalEgress: <egress_name>: enabled:", "spec: gateways: additionalEgress: <egress_name>: requestedNetworkView:", "spec: gateways: additionalEgress: <egress_name>: service: metadata: labels: federation.maistra.io/egress-for:", "spec: gateways: additionalEgress: <egress_name>: service: ports:", "spec: gateways: additionalIngress:", "spec: gateways: additionalIgress: <ingress_name>: enabled:", "spec: gateways: additionalIngress: <ingress_name>: service: type:", "spec: gateways: additionalIngress: <ingress_name>: service: type:", "spec: gateways: additionalIngress: <ingress_name>: service: metadata: labels: federation.maistra.io/ingress-for:", "spec: gateways: additionalIngress: <ingress_name>: service: ports:", "spec: gateways: additionalIngress: <ingress_name>: service: ports: nodePort:", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: green-mesh namespace: green-mesh-system spec: gateways: additionalIngress: ingress-green-mesh: enabled: true service: type: NodePort metadata: labels: federation.maistra.io/ingress-for: ingress-green-mesh ports: - port: 15443 nodePort: 30510 name: tls - port: 8188 nodePort: 32359 name: https-discovery", "kind: ServiceMeshControlPlane metadata: name: red-mesh namespace: red-mesh-system spec: security: trust: domain: red-mesh.local", "spec: security: trust: domain:", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "oc project red-mesh-system", "oc edit -n red-mesh-system smcp red-mesh", "oc get smcp -n red-mesh-system", "NAME READY STATUS PROFILES VERSION AGE red-mesh 10/10 ComponentsReady [\"default\"] 2.1.0 4m25s", "kind: ServiceMeshPeer apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: remote: addresses: - ingress-red-mesh.green-mesh-system.apps.domain.com gateways: ingress: name: 
ingress-green-mesh egress: name: egress-green-mesh security: trustDomain: green-mesh.local clientID: green-mesh.local/ns/green-mesh-system/sa/egress-red-mesh-service-account certificateChain: kind: ConfigMap name: green-mesh-ca-root-cert", "metadata: name:", "metadata: namespace:", "spec: remote: addresses:", "spec: remote: discoveryPort:", "spec: remote: servicePort:", "spec: gateways: ingress: name:", "spec: gateways: egress: name:", "spec: security: trustDomain:", "spec: security: clientID:", "spec: security: certificateChain: kind: ConfigMap name:", "oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443", "oc project red-mesh-system", "kind: ServiceMeshPeer apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: remote: addresses: - ingress-red-mesh.green-mesh-system.apps.domain.com gateways: ingress: name: ingress-green-mesh egress: name: egress-green-mesh security: trustDomain: green-mesh.local clientID: green-mesh.local/ns/green-mesh-system/sa/egress-red-mesh-service-account certificateChain: kind: ConfigMap name: green-mesh-ca-root-cert", "oc create -n red-mesh-system -f servicemeshpeer.yaml", "oc -n red-mesh-system get servicemeshpeer green-mesh -o yaml", "status: discoveryStatus: active: - pod: istiod-red-mesh-b65457658-9wq5j remotes: - connected: true lastConnected: \"2021-10-05T13:02:25Z\" lastFullSync: \"2021-10-05T13:02:25Z\" source: 10.128.2.149 watch: connected: true lastConnected: \"2021-10-05T13:02:55Z\" lastDisconnectStatus: 503 Service Unavailable lastFullSync: \"2021-10-05T13:05:43Z\"", "kind: ExportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: exportRules: # export ratings.mesh-x-bookinfo as ratings.bookinfo - type: NameSelector nameSelector: namespace: red-mesh-bookinfo name: red-ratings alias: namespace: bookinfo name: ratings # export any service in red-mesh-bookinfo namespace with label export-service=true - type: LabelSelector labelSelector: namespace: red-mesh-bookinfo selector: matchLabels: export-service: \"true\" aliases: # export all matching services as if they were in the bookinfo namespace - namespace: \"*\" name: \"*\" alias: namespace: bookinfo", "metadata: name:", "metadata: namespace:", "spec: exportRules: - type:", "spec: exportRules: - type: NameSelector nameSelector: namespace: name:", "spec: exportRules: - type: NameSelector nameSelector: alias: namespace: name:", "spec: exportRules: - type: LabelSelector labelSelector: namespace: <exportingMesh> selector: matchLabels: <labelKey>: <labelValue>", "spec: exportRules: - type: LabelSelector labelSelector: namespace: <exportingMesh> selector: matchLabels: <labelKey>: <labelValue> aliases: - namespace: name: alias: namespace: name:", "kind: ExportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: blue-mesh namespace: red-mesh-system spec: exportRules: - type: NameSelector nameSelector: namespace: \"*\" name: ratings", "kind: ExportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: green-mesh namespace: red-mesh-system spec: exportRules: - type: NameSelector nameSelector: namespace: west-data-center name: \"*\"", "oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443", "oc project red-mesh-system", "apiVersion: federation.maistra.io/v1 kind: ExportedServiceSet metadata: name: green-mesh namespace: red-mesh-system spec: exportRules: - type: NameSelector nameSelector: namespace: red-mesh-bookinfo name: ratings alias: namespace: bookinfo name: 
red-ratings - type: NameSelector nameSelector: namespace: red-mesh-bookinfo name: reviews", "oc create -n <ControlPlaneNamespace> -f <ExportedServiceSet.yaml>", "oc create -n red-mesh-system -f export-to-green-mesh.yaml", "oc get exportedserviceset <PeerMeshExportedTo> -o yaml", "oc -n red-mesh-system get exportedserviceset green-mesh -o yaml", "status: exportedServices: - exportedName: red-ratings.bookinfo.svc.green-mesh-exports.local localService: hostname: ratings.red-mesh-bookinfo.svc.cluster.local name: ratings namespace: red-mesh-bookinfo - exportedName: reviews.red-mesh-bookinfo.svc.green-mesh-exports.local localService: hostname: reviews.red-mesh-bookinfo.svc.cluster.local name: reviews namespace: red-mesh-bookinfo", "kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh #name of mesh that exported the service namespace: green-mesh-system #mesh namespace that service is being imported into spec: importRules: # first matching rule is used # import ratings.bookinfo as ratings.bookinfo - type: NameSelector importAsLocal: false nameSelector: namespace: bookinfo name: ratings alias: # service will be imported as ratings.bookinfo.svc.red-mesh-imports.local namespace: bookinfo name: ratings", "metadata: name:", "metadata: namespace:", "spec: importRules: - type:", "spec: importRules: - type: NameSelector nameSelector: namespace: name:", "spec: importRules: - type: NameSelector importAsLocal:", "spec: importRules: - type: NameSelector nameSelector: namespace: name: alias: namespace: name:", "kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh namespace: blue-mesh-system spec: importRules: - type: NameSelector importAsLocal: false nameSelector: namespace: bookinfo name: ratings", "kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh namespace: green-mesh-system spec: importRules: - type: NameSelector importAsLocal: false nameSelector: namespace: west-data-center name: \"*\"", "oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443", "oc project green-mesh-system", "kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh namespace: green-mesh-system spec: importRules: - type: NameSelector importAsLocal: false nameSelector: namespace: bookinfo name: red-ratings alias: namespace: bookinfo name: ratings", "oc create -n <ControlPlaneNamespace> -f <ImportedServiceSet.yaml>", "oc create -n green-mesh-system -f import-from-red-mesh.yaml", "oc get importedserviceset <PeerMeshImportedInto> -o yaml", "oc -n green-mesh-system get importedserviceset/red-mesh -o yaml", "status: importedServices: - exportedName: red-ratings.bookinfo.svc.green-mesh-exports.local localService: hostname: ratings.bookinfo.svc.red-mesh-imports.local name: ratings namespace: bookinfo - exportedName: reviews.red-mesh-bookinfo.svc.green-mesh-exports.local localService: hostname: \"\" name: \"\" namespace: \"\"", "kind: ImportedServiceSet apiVersion: federation.maistra.io/v1 metadata: name: red-mesh #name of mesh that exported the service namespace: green-mesh-system #mesh namespace that service is being imported into spec: importRules: # first matching rule is used # import ratings.bookinfo as ratings.bookinfo - type: NameSelector importAsLocal: true nameSelector: namespace: bookinfo name: ratings alias: # service will be imported as ratings.bookinfo.svc.red-mesh-imports.local namespace: bookinfo name: ratings #Locality within which imported services should be associated. 
locality: region: us-west", "oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443", "oc project <smcp-system>", "oc project green-mesh-system", "oc edit -n <smcp-system> -f <ImportedServiceSet.yaml>", "oc edit -n green-mesh-system -f import-from-red-mesh.yaml", "oc login --username=<NAMEOFUSER> <API token> https://<HOSTNAME>:6443", "oc project <smcp-system>", "oc project green-mesh-system", "apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: default-failover namespace: bookinfo spec: host: \"ratings.bookinfo.svc.cluster.local\" trafficPolicy: loadBalancer: localityLbSetting: enabled: true failover: - from: us-east to: us-west outlierDetection: consecutive5xxErrors: 3 interval: 10s baseEjectionTime: 1m", "oc create -n <application namespace> -f <DestinationRule.yaml>", "oc create -n bookinfo -f green-mesh-us-west-DestinationRule.yaml", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: openid-connect namespace: istio-ingress spec: selector: matchLabels: istio: ingressgateway url: file:///opt/filters/openid.wasm sha256: 1ef0c9a92b0420cf25f7fe5d481b231464bc88f486ca3b9c83ed5cc21d2f6210 phase: AUTHN pluginConfig: openid_server: authn openid_realm: ingress", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: openid-connect namespace: istio-system spec: selector: matchLabels: istio: ingressgateway url: oci://private-registry:5000/openid-connect/openid:latest imagePullPolicy: IfNotPresent imagePullSecret: private-registry-pull-secret phase: AUTHN pluginConfig: openid_server: authn openid_realm: ingress", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: openid-connect namespace: istio-system spec: selector: matchLabels: istio: ingressgateway url: oci://private-registry:5000/openid-connect/openid:latest imagePullPolicy: IfNotPresent imagePullSecret: private-registry-pull-secret phase: AUTHN pluginConfig: openid_server: authn openid_realm: ingress", "oc apply -f plugin.yaml", "schemaVersion: 1 name: <your-extension> description: <description> version: 1.0.0 phase: PreAuthZ priority: 100 module: extension.wasm", "apiVersion: maistra.io/v1 kind: ServiceMeshExtension metadata: name: header-append namespace: istio-system spec: workloadSelector: labels: app: httpbin config: first-header: some-value another-header: another-value image: quay.io/maistra-dev/header-append-filter:2.1 phase: PostAuthZ priority: 100", "oc apply -f <extension>.yaml", "apiVersion: maistra.io/v1 kind: ServiceMeshExtension metadata: name: header-append namespace: istio-system spec: workloadSelector: labels: app: httpbin config: first-header: some-value another-header: another-value image: quay.io/maistra-dev/header-append-filter:2.2 phase: PostAuthZ priority: 100", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: header-append namespace: istio-system spec: selector: matchLabels: app: httpbin url: oci://quay.io/maistra-dev/header-append-filter:2.2 phase: STATS pluginConfig: first-header: some-value another-header: another-value", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> namespace: <bookinfo> 1 spec: selector: 2 labels: app: <product_page> pluginConfig: <yaml_configuration> url: oci://registry.redhat.io/3scale-amp2/3scale-auth-wasm-rhel8:0.0.3 phase: AUTHZ priority: 100", "oc apply -f threescale-wasm-auth-bookinfo.yaml", "apiVersion: networking.istio.io/v1beta1 kind: ServiceEntry metadata: name: 
service-entry-threescale-saas-backend spec: hosts: - su1.3scale.net ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS", "apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: destination-rule-threescale-saas-backend spec: host: su1.3scale.net trafficPolicy: tls: mode: SIMPLE sni: su1.3scale.net", "oc apply -f service-entry-threescale-saas-backend.yml", "oc apply -f destination-rule-threescale-saas-backend.yml", "apiVersion: networking.istio.io/v1beta1 kind: ServiceEntry metadata: name: service-entry-threescale-saas-system spec: hosts: - multitenant.3scale.net ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS", "apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: destination-rule-threescale-saas-system spec: host: multitenant.3scale.net trafficPolicy: tls: mode: SIMPLE sni: multitenant.3scale.net", "oc apply -f service-entry-threescale-saas-system.yml", "oc apply -f <destination-rule-threescale-saas-system.yml>", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> namespace: <bookinfo> spec: pluginConfig: api: v1", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: system: name: <saas_porta> upstream: <object> token: <my_account_token> ttl: 300", "apiVersion: maistra.io/v1 upstream: name: outbound|443||multitenant.3scale.net url: \"https://myaccount-admin.3scale.net/\" timeout: 5000", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: backend: name: backend upstream: <object>", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: services: - id: \"2555417834789\" token: service_token authorities: - \"*.app\" - 0.0.0.0 - \"0.0.0.0:8443\" credentials: <object> mapping_rules: <object>", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: services: - credentials: user_key: <array_of_lookup_queries> app_id: <array_of_lookup_queries> app_key: <array_of_lookup_queries>", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: services: - credentials: user_key: - <source_type>: <object> - <source_type>: <object> app_id: - <source_type>: <object> app_key: - <source_type>: <object>", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: pluginConfig: mapping_rules: - method: GET pattern: / usages: - name: hits delta: 1 - method: GET pattern: /products/ usages: - name: products delta: 1 - method: ANY pattern: /products/{id}/sold usages: - name: sales delta: 1 - name: products delta: 1", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: services: credentials: user_key: - query_string: keys: - <user_key> - header: keys: - <user_key>", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: services: credentials: app_id: - query_string: keys: - <app_id> - header: keys: - <app_id> app_key: - query_string: keys: - <app_key> - header: keys: - <app_key>", "aladdin:opensesame: Authorization: Basic YWxhZGRpbjpvcGVuc2VzYW1l", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: 
<threescale_wasm_plugin_name> spec: services: credentials: app_id: - header: keys: - authorization ops: - split: separator: \" \" max: 2 - length: min: 2 - drop: head: 1 - base64_urlsafe - split: max: 2 app_key: - header: keys: - app_key", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: services: credentials: app_id: - header: keys: - authorization ops: - split: separator: \" \" max: 2 - length: min: 2 - reverse - glob: - Basic - drop: tail: 1 - base64_urlsafe - split: max: 2 - test: if: length: min: 2 then: - strlen: max: 63 - or: - strlen: min: 1 - drop: tail: 1 - assert: - and: - reverse - or: - strlen: min: 8 - glob: - aladdin - admin", "apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: bookinfo spec: selector: matchLabels: app: productpage jwtRules: - issuer: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak jwksUri: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: services: credentials: app_id: - filter: path: - envoy.filters.http.jwt_authn - \"0\" keys: - azp - aud ops: - take: head: 1", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: services: credentials: app_id: - header: keys: - x-jwt-payload ops: - base64_urlsafe - json: - keys: - azp - aud - take: head: 1 ,,,", "apiVersion: extensions.istio.io/v1alpha1 kind: WasmPlugin metadata: name: <threescale_wasm_plugin_name> spec: url: oci://registry.redhat.io/3scale-amp2/3scale-auth-wasm-rhel8:0.0.3 imagePullSecret: <optional_pull_secret_resource> phase: AUTHZ priority: 100 selector: labels: app: <product_page> pluginConfig: api: v1 system: name: <system_name> upstream: name: outbound|443||multitenant.3scale.net url: https://istiodevel-admin.3scale.net/ timeout: 5000 token: <token> backend: name: <backend_name> upstream: name: outbound|443||su1.3scale.net url: https://su1.3scale.net/ timeout: 5000 extensions: - no_body services: - id: '2555417834780' authorities: - \"*\" credentials: user_key: - query_string: keys: - <user_key> - header: keys: - <user_key> app_id: - query_string: keys: - <app_id> - header: keys: - <app_id> app_key: - query_string: keys: - <app_key> - header: keys: - <app_key>", "apiVersion: \"config.istio.io/v1alpha2\" kind: handler metadata: name: threescale spec: adapter: threescale params: system_url: \"https://<organization>-admin.3scale.net/\" access_token: \"<ACCESS_TOKEN>\" connection: address: \"threescale-istio-adapter:3333\"", "apiVersion: \"config.istio.io/v1alpha2\" kind: rule metadata: name: threescale spec: match: destination.labels[\"service-mesh.3scale.net\"] == \"true\" actions: - handler: threescale.handler instances: - threescale-authorization.instance", "3scale-config-gen --name=admin-credentials --url=\"https://<organization>-admin.3scale.net:443\" --token=\"[redacted]\"", "3scale-config-gen --url=\"https://<organization>-admin.3scale.net\" --name=\"my-unique-id\" --service=\"123456789\" --token=\"[redacted]\"", "export NS=\"istio-system\" URL=\"https://replaceme-admin.3scale.net:443\" NAME=\"name\" TOKEN=\"token\" exec -n USD{NS} USD(oc get po -n USD{NS} -o jsonpath='{.items[?(@.metadata.labels.app==\"3scale-istio-adapter\")].metadata.name}') -it -- ./3scale-config-gen --url USD{URL} --name USD{NAME} --token USD{TOKEN} -n 
USD{NS}", "export CREDENTIALS_NAME=\"replace-me\" export SERVICE_ID=\"replace-me\" export DEPLOYMENT=\"replace-me\" patch=\"USD(oc get deployment \"USD{DEPLOYMENT}\" patch=\"USD(oc get deployment \"USD{DEPLOYMENT}\" --template='{\"spec\":{\"template\":{\"metadata\":{\"labels\":{ {{ range USDk,USDv := .spec.template.metadata.labels }}\"{{ USDk }}\":\"{{ USDv }}\",{{ end }}\"service-mesh.3scale.net/service-id\":\"'\"USD{SERVICE_ID}\"'\",\"service-mesh.3scale.net/credentials\":\"'\"USD{CREDENTIALS_NAME}\"'\"}}}}}' )\" patch deployment \"USD{DEPLOYMENT}\" --patch ''\"USD{patch}\"''", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: user: request.query_params[\"user_key\"] | request.headers[\"user-key\"] | \"\" action: path: request.url_path method: request.method | \"get\"", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: app_id: request.query_params[\"app_id\"] | request.headers[\"app-id\"] | \"\" app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" action: path: request.url_path method: request.method | \"get\"", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization spec: template: threescale-authorization params: subject: properties: app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" client_id: request.auth.claims[\"azp\"] | \"\" action: path: request.url_path method: request.method | \"get\" service: destination.labels[\"service-mesh.3scale.net/service-id\"] | \"\"", "apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: bookinfo spec: selector: matchLabels: app: productpage jwtRules: - issuer: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak jwksUri: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization spec: template: authorization params: subject: user: request.query_params[\"user_key\"] | request.headers[\"user-key\"] | properties: app_id: request.query_params[\"app_id\"] | request.headers[\"app-id\"] | \"\" app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" client_id: request.auth.claims[\"azp\"] | \"\" action: path: request.url_path method: request.method | \"get\" service: destination.labels[\"service-mesh.3scale.net/service-id\"] | \"\"", "oc get pods -n istio-system", "oc logs istio-system", "oc get pods -n openshift-operators", "NAME READY STATUS RESTARTS AGE istio-operator-bb49787db-zgr87 1/1 Running 0 15s jaeger-operator-7d5c4f57d8-9xphf 1/1 Running 0 2m42s kiali-operator-f9c8d84f4-7xh2v 1/1 Running 0 64s", "oc get pods -n openshift-operators-redhat", "NAME READY STATUS RESTARTS AGE elasticsearch-operator-d4f59b968-796vq 1/1 Running 0 15s", "oc logs -n openshift-operators <podName>", "oc logs -n openshift-operators istio-operator-bb49787db-zgr87", "oc get pods -n istio-system", "NAME READY STATUS RESTARTS AGE grafana-6776785cfc-6fz7t 2/2 Running 0 102s istio-egressgateway-5f49dd99-l9ppq 1/1 Running 0 103s istio-ingressgateway-6dc885c48-jjd8r 1/1 Running 0 103s istiod-basic-6c9cc55998-wg4zq 1/1 Running 0 2m14s jaeger-6865d5d8bf-zrfss 2/2 Running 0 100s kiali-579799fbb7-8mwc8 1/1 
Running 0 46s prometheus-5c579dfb-6qhjk 2/2 Running 0 115s", "oc get smcp -n istio-system", "NAME READY STATUS PROFILES VERSION AGE basic 10/10 ComponentsReady [\"default\"] 2.1.3 4m2s", "NAME READY STATUS TEMPLATE VERSION AGE basic-install 10/10 UpdateSuccessful default v1.1 3d16h", "oc describe smcp <smcp-name> -n <controlplane-namespace>", "oc describe smcp basic -n istio-system", "oc get jaeger -n istio-system", "NAME STATUS VERSION STRATEGY STORAGE AGE jaeger Running 1.30.0 allinone memory 15m", "oc get kiali -n istio-system", "NAME AGE kiali 15m", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "oc get route -n istio-system jaeger -o jsonpath='{.spec.host}'", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "oc project istio-system", "oc edit smcp <smcp_name>", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: proxy: accessLogging: file: name: /dev/stdout #file name", "oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}'", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 proxy: runtime: container: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 128Mi tracing: type: Jaeger gateways: ingress: # istio-ingressgateway service: type: ClusterIP ports: - name: status-port port: 15020 - name: http2 port: 80 targetPort: 8080 - name: https port: 443 targetPort: 8443 meshExpansionPorts: [] egress: # istio-egressgateway service: type: ClusterIP ports: - name: status-port port: 15020 - name: http2 port: 80 targetPort: 8080 - name: https port: 443 targetPort: 8443 additionalIngress: some-other-ingress-gateway: {} additionalEgress: some-other-egress-gateway: {} policy: type: Mixer mixer: # only applies if policy.type: Mixer enableChecks: true failOpen: false telemetry: type: Istiod # or Mixer mixer: # only applies if telemetry.type: Mixer, for v1 telemetry sessionAffinity: false batching: maxEntries: 100 maxTime: 1s adapters: kubernetesenv: true stdio: enabled: true outputAsJSON: true addons: grafana: enabled: true install: config: env: {} envSecrets: {} persistence: enabled: true storageClassName: \"\" accessMode: ReadWriteOnce capacity: requests: storage: 5Gi service: ingress: contextPath: /grafana tls: termination: reencrypt kiali: name: kiali enabled: true install: # install kiali CR if not present dashboard: viewOnly: false enableGrafana: true enableTracing: true enablePrometheus: true service: ingress: contextPath: /kiali jaeger: name: jaeger install: storage: type: Elasticsearch # or Memory memory: maxTraces: 100000 elasticsearch: nodeCount: 3 storage: {} redundancyPolicy: SingleRedundancy indexCleaner: {} ingress: {} # jaeger ingress configuration runtime: components: pilot: deployment: replicas: 2 pod: affinity: {} container: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 128Mi grafana: deployment: {} pod: {} kiali: deployment: {} pod: {}", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: general: logging: componentLevels: {} # misc: error logAsJSON: false validationMessages: true", "logging:", "logging: componentLevels:", "logging: logAsJSON:", "validationMessages:", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: profiles: - YourProfileName", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 tracing: sampling: 100 type: Jaeger", "tracing: sampling:", "tracing: type:", 
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: addons: 3Scale: enabled: false PARAM_THREESCALE_LISTEN_ADDR: 3333 PARAM_THREESCALE_LOG_LEVEL: info PARAM_THREESCALE_LOG_JSON: true PARAM_THREESCALE_LOG_GRPC: false PARAM_THREESCALE_REPORT_METRICS: true PARAM_THREESCALE_METRICS_PORT: 8080 PARAM_THREESCALE_CACHE_TTL_SECONDS: 300 PARAM_THREESCALE_CACHE_REFRESH_SECONDS: 180 PARAM_THREESCALE_CACHE_ENTRIES_MAX: 1000 PARAM_THREESCALE_CACHE_REFRESH_RETRIES: 1 PARAM_THREESCALE_ALLOW_INSECURE_CONN: false PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS: 10 PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS: 60 PARAM_USE_CACHED_BACKEND: false PARAM_BACKEND_CACHE_FLUSH_INTERVAL_SECONDS: 15 PARAM_BACKEND_CACHE_POLICY_FAIL_CLOSED: true", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: addons: kiali: name: kiali enabled: true install: dashboard: viewOnly: false enableGrafana: true enableTracing: true enablePrometheus: true service: ingress: contextPath: /kiali", "spec: addons: kiali: name:", "kiali: enabled:", "kiali: install:", "kiali: install: dashboard:", "kiali: install: dashboard: viewOnly:", "kiali: install: dashboard: enableGrafana:", "kiali: install: dashboard: enablePrometheus:", "kiali: install: dashboard: enableTracing:", "kiali: install: service:", "kiali: install: service: metadata:", "kiali: install: service: metadata: annotations:", "kiali: install: service: metadata: labels:", "kiali: install: service: ingress:", "kiali: install: service: ingress: metadata: annotations:", "kiali: install: service: ingress: metadata: labels:", "kiali: install: service: ingress: enabled:", "kiali: install: service: ingress: contextPath:", "install: service: ingress: hosts:", "install: service: ingress: tls:", "kiali: install: service: nodePort:", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 tracing: sampling: 100 type: Jaeger", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 tracing: sampling: 10000 type: Jaeger addons: jaeger: name: jaeger install: storage: type: Memory", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 tracing: sampling: 10000 type: Jaeger addons: jaeger: name: jaeger #name of Jaeger CR install: storage: type: Elasticsearch ingress: enabled: true runtime: components: tracing.jaeger.elasticsearch: # only supports resources and image name container: resources: {}", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 tracing: sampling: 1000 type: Jaeger addons: jaeger: name: MyJaegerInstance #name of Jaeger CR install: storage: type: Elasticsearch ingress: enabled: true", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: version: v2.6 tracing: sampling: 1000 type: Jaeger addons: jaeger: name: MyJaegerInstance #name of Jaeger CR", "apiVersion: jaegertracing.io/v1 kind: Jaeger spec: ingress: enabled: true openshift: htpasswdFile: /etc/proxy/htpasswd/auth sar: '{\"namespace\": \"istio-system\", \"resource\": \"pods\", \"verb\": \"get\"}' options: {} resources: {} security: oauth-proxy volumes: - name: secret-htpasswd secret: secretName: htpasswd - configMap: defaultMode: 420 items: - key: ca-bundle.crt path: tls-ca-bundle.pem name: trusted-ca-bundle optional: true name: trusted-ca-bundle volumeMounts: - mountPath: /etc/proxy/htpasswd name: secret-htpasswd - mountPath: /etc/pki/ca-trust/extracted/pem/ name: 
trusted-ca-bundle readOnly: true", "oc login https://<HOSTNAME>:6443", "oc project istio-system", "oc edit -n openshift-distributed-tracing -f jaeger.yaml", "apiVersion: jaegertracing.io/v1 kind: Jaeger spec: ingress: enabled: true openshift: htpasswdFile: /etc/proxy/htpasswd/auth sar: '{\"namespace\": \"istio-system\", \"resource\": \"pods\", \"verb\": \"get\"}' options: {} resources: {} security: oauth-proxy volumes: - name: secret-htpasswd secret: secretName: htpasswd - configMap: defaultMode: 420 items: - key: ca-bundle.crt path: tls-ca-bundle.pem name: trusted-ca-bundle optional: true name: trusted-ca-bundle volumeMounts: - mountPath: /etc/proxy/htpasswd name: secret-htpasswd - mountPath: /etc/pki/ca-trust/extracted/pem/ name: trusted-ca-bundle readOnly: true", "oc get pods -n openshift-distributed-tracing", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: name spec: strategy: <deployment_strategy> allInOne: options: {} resources: {} agent: options: {} resources: {} collector: options: {} resources: {} sampling: options: {} storage: type: options: {} query: options: {} resources: {} ingester: options: {} resources: {} options: {}", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-all-in-one-inmemory", "collector: replicas:", "spec: collector: options: {}", "options: collector: num-workers:", "options: collector: queue-size:", "options: kafka: producer: topic: jaeger-spans", "options: kafka: producer: brokers: my-cluster-kafka-brokers.kafka:9092", "options: log-level:", "options: otlp: enabled: true grpc: host-port: 4317 max-connection-age: 0s max-connection-age-grace: 0s max-message-size: 4194304 tls: enabled: false cert: /path/to/cert.crt cipher-suites: \"TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256\" client-ca: /path/to/cert.ca reload-interval: 0s min-version: 1.2 max-version: 1.3", "options: otlp: enabled: true http: cors: allowed-headers: [<header-name>[, <header-name>]*] allowed-origins: * host-port: 4318 max-connection-age: 0s max-connection-age-grace: 0s max-message-size: 4194304 read-timeout: 0s read-header-timeout: 2s idle-timeout: 0s tls: enabled: false cert: /path/to/cert.crt cipher-suites: \"TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256\" client-ca: /path/to/cert.ca reload-interval: 0s min-version: 1.2 max-version: 1.3", "spec: sampling: options: {} default_strategy: service_strategy:", "default_strategy: type: service_strategy: type:", "default_strategy: param: service_strategy: param:", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: with-sampling spec: sampling: options: default_strategy: type: probabilistic param: 0.5 service_strategies: - service: alpha type: probabilistic param: 0.8 operation_strategies: - operation: op1 type: probabilistic param: 0.2 - operation: op2 type: probabilistic param: 0.4 - service: beta type: ratelimiting param: 5", "spec: sampling: options: default_strategy: type: probabilistic param: 1", "spec: storage: type:", "storage: secretname:", "storage: options: {}", "storage: esIndexCleaner: enabled:", "storage: esIndexCleaner: numberOfDays:", "storage: esIndexCleaner: schedule:", "elasticsearch: properties: doNotProvision:", "elasticsearch: properties: name:", "elasticsearch: nodeCount:", "elasticsearch: resources: requests: cpu:", "elasticsearch: resources: requests: memory:", "elasticsearch: resources: limits: cpu:", "elasticsearch: resources: limits: memory:", "elasticsearch: redundancyPolicy:", "elasticsearch: useCertManagement:", "apiVersion: jaegertracing.io/v1 kind: Jaeger 
metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 3 resources: requests: cpu: 1 memory: 16Gi limits: memory: 16Gi", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 1 storage: 1 storageClassName: gp2 size: 5Gi resources: requests: cpu: 200m memory: 4Gi limits: memory: 4Gi redundancyPolicy: ZeroRedundancy", "es: server-urls:", "es: max-doc-count:", "es: max-num-spans:", "es: max-span-age:", "es: sniffer:", "es: sniffer-tls-enabled:", "es: timeout:", "es: username:", "es: password:", "es: version:", "es: num-replicas:", "es: num-shards:", "es: create-index-templates:", "es: index-prefix:", "es: bulk: actions:", "es: bulk: flush-interval:", "es: bulk: size:", "es: bulk: workers:", "es: tls: ca:", "es: tls: cert:", "es: tls: enabled:", "es: tls: key:", "es: tls: server-name:", "es: token-file:", "es-archive: bulk: actions:", "es-archive: bulk: flush-interval:", "es-archive: bulk: size:", "es-archive: bulk: workers:", "es-archive: create-index-templates:", "es-archive: enabled:", "es-archive: index-prefix:", "es-archive: max-doc-count:", "es-archive: max-num-spans:", "es-archive: max-span-age:", "es-archive: num-replicas:", "es-archive: num-shards:", "es-archive: password:", "es-archive: server-urls:", "es-archive: sniffer:", "es-archive: sniffer-tls-enabled:", "es-archive: timeout:", "es-archive: tls: ca:", "es-archive: tls: cert:", "es-archive: tls: enabled:", "es-archive: tls: key:", "es-archive: tls: server-name:", "es-archive: token-file:", "es-archive: username:", "es-archive: version:", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 index-prefix: my-prefix tls: ca: /es/certificates/ca.crt secretName: tracing-secret volumeMounts: - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 1 index-prefix: my-prefix tls: 2 ca: /es/certificates/ca.crt secretName: tracing-secret 3 volumeMounts: 4 - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public", "spec: query: replicas:", "spec: query: options: {}", "options: log-level:", "options: query: base-path:", "apiVersion: jaegertracing.io/v1 kind: \"Jaeger\" metadata: name: \"my-jaeger\" spec: strategy: allInOne allInOne: options: log-level: debug query: base-path: /jaeger", "spec: ingester: options: {}", "options: deadlockInterval:", "options: kafka: consumer: topic:", "options: kafka: consumer: brokers:", "options: log-level:", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-streaming spec: strategy: streaming collector: options: kafka: producer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: options: kafka: consumer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: deadlockInterval: 5 storage: type: elasticsearch options: es: server-urls: http://elasticsearch:9200", "oc delete smmr -n istio-system default", "oc get smcp -n istio-system", "oc delete smcp -n istio-system 
<name_of_custom_resource>", "oc delete svc maistra-admission-controller -n openshift-operators", "oc -n openshift-operators delete ds -lmaistra-version", "oc delete clusterrole/istio-admin clusterrole/istio-cni clusterrolebinding/istio-cni", "oc delete clusterrole istio-view istio-edit", "oc delete clusterrole jaegers.jaegertracing.io-v1-admin jaegers.jaegertracing.io-v1-crdview jaegers.jaegertracing.io-v1-edit jaegers.jaegertracing.io-v1-view", "oc delete cm -n openshift-operators maistra-operator-cabundle", "oc delete cm -n openshift-operators istio-cni-config istio-cni-config-v2-3", "oc delete sa -n openshift-operators istio-cni" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html-single/service_mesh/index
Chapter 4. Migrating isolated nodes to execution nodes
Chapter 4. Migrating isolated nodes to execution nodes Upgrading from version 1.x to the latest version of the Red Hat Ansible Automation Platform requires platform administrators to migrate data from isolated legacy nodes to execution nodes. This migration is necessary to deploy the automation mesh. This guide explains how to perform a side-by-side migration. This ensures that the data on your original automation environment remains untouched during the migration process. The migration process involves the following steps: Verify upgrade configurations. Back up the original instance. Deploy a new instance for a side-by-side upgrade. Recreate instance groups in the new instance using automation controller. Restore the original backup to the new instance. Set up execution nodes and upgrade the instance to Red Hat Ansible Automation Platform 2.4. Configure the upgraded controller instance. 4.1. Prerequisites for upgrading Ansible Automation Platform Before you begin to upgrade Ansible Automation Platform, ensure your environment meets the following node and configuration requirements. 4.1.1. Node requirements The following specifications are required for the nodes involved in the Ansible Automation Platform upgrade process: 16 GB of RAM for controller nodes, database node, execution nodes, and hop nodes. 4 CPUs for controller nodes, database nodes, execution nodes, and hop nodes. 150 GB+ disk space for the database node. 40 GB+ disk space for non-database nodes. DHCP reservations use infinite leases to deploy the cluster with static IP addresses. DNS records for all nodes. Red Hat Enterprise Linux 8 or later 64-bit (x86) installed for all nodes. Chrony configured for all nodes. Python 3.9 or later for all content dependencies. 4.1.2. Automation controller configuration requirements The following automation controller configurations are required before you proceed with the Ansible Automation Platform upgrade process: Configuring NTP server using Chrony Each Ansible Automation Platform node in the cluster must have access to an NTP server. Use chronyd to synchronize the system clock with NTP servers. This ensures that cluster nodes using SSL certificates that require validation do not fail if the date and time between nodes are not in sync. This is required for all nodes used in the upgraded Ansible Automation Platform cluster: Install chrony : # dnf install chrony --assumeyes Open /etc/chrony.conf using a text editor. Locate the public server pool section and modify it to include the appropriate NTP server addresses. Only one server is required, but three are recommended. Add the 'iburst' option to speed up the time it takes to properly sync with the servers: # Use public servers from the pool.ntp.org project. # Please consider joining the pool (http://www.pool.ntp.org/join.html). server <ntp-server-address> iburst Save changes within the /etc/chrony.conf file. Start and enable the chronyd daemon: # systemctl --now enable chronyd.service Verify the chronyd daemon status: # systemctl status chronyd.service Attaching Red Hat subscription on all nodes Red Hat Ansible Automation Platform requires you to have valid subscriptions attached to all nodes. You can verify that your current node has a Red Hat subscription by running the following command: # subscription-manager list --consumed If there is no Red Hat subscription attached to the node, see Attaching your Ansible Automation Platform subscription for more information.
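If the list --consumed output comes back empty, the following is a minimal command-line sketch of the generic subscription-manager workflow rather than a step from this guide; the pool ID is a placeholder you must look up in your own account:

# subscription-manager register
# subscription-manager list --available
# subscription-manager attach --pool=<pool_id>

Here, register prompts for your Red Hat account credentials, list --available shows the subscriptions you can attach, and attach --pool=<pool_id> consumes the selected entitlement on that node. Repeat on every node used in the upgraded cluster.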
Creating non-root user with sudo privileges Before you upgrade Ansible Automation Platform, it is recommended to create a non-root user with sudo privileges for the deployment process. This user is used for: SSH connectivity. Passwordless authentication during installation. Privilege escalation (sudo) permissions. The following example uses ansible to name this user. On all nodes used in the upgraded Ansible Automation Platform cluster, create a non-root user named ansible and generate an SSH key: Create a non-root user: # useradd ansible Set a password for your user: # passwd ansible 1 Changing password for ansible. Old Password: New Password: Retype New Password: 1 Replace ansible with the non-root user from step 1, if using a different name Generate an SSH key as the user: $ ssh-keygen -t rsa Disable password requirements when using sudo : # echo "ansible ALL=(ALL) NOPASSWD:ALL" | sudo tee -a /etc/sudoers.d/ansible Copying SSH keys to all nodes With the ansible user created, copy the SSH key to all the nodes used in the upgraded Ansible Automation Platform cluster. This ensures that when the Ansible Automation Platform installation runs, it can connect over SSH to all the nodes without a password: $ ssh-copy-id [email protected] Note If running within a cloud provider, you might need to instead create an ~/.ssh/authorized_keys file containing the public key for the ansible user on all your nodes and set the permissions on the authorized_keys file so that only the owner ( ansible ) has read and write access (permissions 600). Configuring firewall settings Configure the firewall settings on all the nodes used in the upgraded Ansible Automation Platform cluster to allow access to the appropriate services and ports for a successful Ansible Automation Platform upgrade. For Red Hat Enterprise Linux 8 or later, enable the firewalld daemon to enable the access needed for all nodes: Install the firewalld package: # dnf install firewalld --assumeyes Start the firewalld service: # systemctl start firewalld Enable the firewalld service: # systemctl enable --now firewalld 4.1.3. Ansible Automation Platform configuration requirements The following Ansible Automation Platform configurations are required before you proceed with the Ansible Automation Platform upgrade process: Configuring firewall settings for execution and hop nodes After upgrading your Red Hat Ansible Automation Platform instance, add the automation mesh port on the mesh nodes (execution and hop nodes) to enable automation mesh functionality. The default port used for the mesh networks on all nodes is 27199/tcp . You can configure the mesh network to use a different port by specifying receptor_listener_port as the variable for each node within your inventory file (see the inventory sketch after the command listing at the end of this chapter). On your hop and execution nodes, set the firewalld port to be used for installation. Ensure that firewalld is running: $ sudo systemctl status firewalld Add the firewalld port on each mesh node (for example, port 27199): $ sudo firewall-cmd --permanent --zone=public --add-port=27199/tcp Reload firewalld : $ sudo firewall-cmd --reload Confirm that the port is open: $ sudo firewall-cmd --list-ports 4.2. Back up your Ansible Automation Platform instance Back up an existing Ansible Automation Platform instance by running the ./setup.sh script with the backup_dir flag, which saves the content and configuration of your current environment: Navigate to your ansible-tower-setup-latest directory.
Run the ./setup.sh script following the example below: $ ./setup.sh -e 'backup_dir=/ansible/mybackup' -e 'use_compression=True' @credentials.yml -b 1 2 1 backup_dir specifies a directory to save your backup to. 2 @credentials.yml passes the password variables and their values encrypted via ansible-vault . With a successful backup, a backup file is created at /ansible/mybackup/tower-backup-latest.tar.gz . This backup will be necessary later to migrate content from your old instance to the new one. 4.3. Deploy a new instance for a side-by-side upgrade To proceed with the side-by-side upgrade process, deploy a second instance of Ansible Tower 3.8.x with the same instance group configurations. This new instance will receive the content and configuration from your original instance, and will later be upgraded to Red Hat Ansible Automation Platform 2.4. 4.3.1. Deploy a new instance of Ansible Tower To deploy a new Ansible Tower instance, do the following: Download the Tower installer version that matches your original Tower instance by navigating to the Ansible Tower installer page . Navigate to the installer, then open the inventory file using a text editor to configure the inventory file for a Tower installation: In addition to any Tower configurations, remove any fields containing isolated_group or instance_group . Note For more information about installing Tower using the Ansible Automation Platform installer, see the Ansible Automation Platform Installation Guide for your specific installation scenario. Run the setup.sh script to begin the installation. Once the new instance is installed, configure the Tower settings to match the instance groups from your original Tower instance. 4.3.2. Recreate instance groups in the new instance To recreate your instance groups in the new instance, do the following: Note Make note of all instance groups from your original Tower instance. You will need to recreate these groups in your new instance. Log in to your new instance of Tower. In the navigation pane, select Administration Instance Groups . Click Create instance group . Enter a Name that matches an instance group from your original instance, then click Save . Repeat until all instance groups from your original instance have been recreated. 4.4. Restore backup to new instance Running the ./setup.sh script with the restore_backup_file flag migrates content from the backup file of your original 1.x instance to the new instance. This effectively migrates all job histories, templates, and other Ansible Automation Platform related content. Procedure Run the following command: $ ./setup.sh -r -e 'restore_backup_file=/ansible/mybackup/tower-backup-latest.tar.gz' -e 'use_compression=True' -e @credentials.yml -r -- --ask-vault-pass 1 2 3 1 restore_backup_file specifies the location of the Ansible Automation Platform backup database 2 use_compression is set to True due to compression being used during the backup process 3 -r sets the restore database option to True Log in to your new RHEL 8 Tower 3.8 instance to verify whether the content from your original instance has been restored: Navigate to Administration Instance Groups . The recreated instance groups should now contain the Total Jobs from your original instance. Using the side navigation panel, check that your content has been imported from your original instance, including Jobs, Templates, Inventories, Credentials, and Users. You now have a new instance of Ansible Tower with all the Ansible content from your original instance.
You will upgrade this new instance to Ansible Automation Platform 2.4 so that you keep all your data without overwriting your original instance. 4.5. Upgrading to Ansible Automation Platform 2.4 To upgrade your instance of Ansible Tower to Ansible Automation Platform 2.4, copy the inventory file from your original Tower instance to your new Tower instance and run the installer. The Red Hat Ansible Automation Platform installer detects a pre-2.4 inventory file and offers an upgraded inventory file to continue with the upgrade process: Download the latest installer for Red Hat Ansible Automation Platform from the Red Hat Ansible Automation Platform download page. Extract the files: $ tar xvzf ansible-automation-platform-setup-<latest_version>.tar.gz Navigate into your Ansible Automation Platform installation directory: $ cd ansible-automation-platform-setup-<latest_version>/ Copy the inventory file from your original instance into the directory of the latest installer: $ cp ansible-tower-setup-3.8.x.x/inventory ansible-automation-platform-setup-<latest_version> Run the setup.sh script: $ ./setup.sh The setup script pauses and indicates that a "pre-2.x" inventory file was detected, but offers a new file called inventory.new.ini allowing you to continue to upgrade your original instance. Open inventory.new.ini with a text editor. Note By running the setup script, the installer modified a few fields from your original inventory file, such as renaming [tower] to [automationcontroller]. Update the newly generated inventory.new.ini file to configure your automation mesh by assigning relevant variables, nodes, and relevant node-to-node peer connections: Note The design of your automation mesh topology depends on the automation needs of your environment. It is beyond the scope of this document to provide designs for all possible scenarios. The following is one example automation mesh design. Example inventory file with a standard control plane consisting of three nodes utilizing hop nodes: 1 Specifies a control node that runs project and inventory updates and system jobs, but not regular jobs. Execution capabilities are disabled on these nodes. 2 Specifies peer relationships for node-to-node connections in the [execution_nodes] group. 3 Specifies hop nodes that route traffic to other execution nodes. Hop nodes cannot execute automation. Import or generate an automation hub API token. Import an existing API token with the automationhub_api_token flag: automationhub_api_token=<api_token> Generate a new API token, and invalidate any existing tokens, by setting the generate_automationhub_token flag to True : generate_automationhub_token=True Once you have finished configuring your inventory.new.ini for automation mesh, run the setup script using inventory.new.ini : $ ./setup.sh -i inventory.new.ini -e @credentials.yml -- --ask-vault-pass Once the installation completes, verify that your Ansible Automation Platform has been installed successfully by logging in to the Ansible Automation Platform dashboard UI across all automation controller nodes. Additional resources For general information about using the Ansible Automation Platform installer, see the Red Hat Ansible Automation Platform installation guide . 4.6. Configuring your upgraded Ansible Automation Platform 4.6.1.
Configuring automation controller instance groups After upgrading your Red Hat Ansible Automation Platform instance, associate your original instances with their corresponding instance groups by configuring settings in the automation controller UI: Log in to the new controller instance. Content from the old instance, such as credentials, jobs, and inventories, should now be visible on your controller instance. Navigate to Administration Instance Groups . Associate execution nodes by clicking on an instance group, then click the Instances tab. Click Associate . Select the node(s) to associate with this instance group, then click Save . You can also modify the default instance group to disassociate your new execution nodes.
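As an optional spot check that is not part of the original procedure, you can confirm from the command line that your execution nodes registered with the control plane. The command below is a sketch only; it assumes shell access to a control node, and output formatting varies by release:

$ sudo awx-manage list_instances

Each execution node should appear in the expected instance group with a recent heartbeat and non-zero capacity. A node that shows zero capacity or no heartbeat usually points to a receptor connectivity or firewall problem (see the mesh port settings in the prerequisites above).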
[ "dnf install chrony --assumeyes", "Use public servers from the pool.ntp.org project. Please consider joining the pool (http://www.pool.ntp.org/join.html). server <ntp-server-address> iburst", "systemctl --now enable chronyd.service", "systemctl status chronyd.service", "subscription-manager list --consumed", "useradd ansible", "passwd ansible 1 Changing password for ansible. Old Password: New Password: Retype New Password:", "ssh-keygen -t rsa", "echo \"ansible ALL=(ALL) NOPASSWD:ALL\" | sudo tee -a /etc/sudoers.d/ansible", "ssh-copy-id [email protected]", "dnf install firewalld --assumeyes", "systemctl start firewalld", "systemctl enable --now firewalld", "sudo systemctl status firewalld", "sudo firewall-cmd --permanent --zone=public --add-port=27199/tcp", "sudo firewall-cmd --reload", "sudo firewall-cmd --list-ports", "./setup.sh -e 'backup_dir=/ansible/mybackup' -e 'use_compression=True' @credentials.yml -b 1 2", "./setup.sh -r -e 'restore_backup_file=/ansible/mybackup/tower-backup-latest.tar.gz' -e 'use_compression=True' -e @credentials.yml -r -- --ask-vault-pass 1 2 3", "tar xvzf ansible-automation-platform-setup- <latest_version >.tar.gz", "cd ansible-automation-platform-setup- <latest_version> /", "cp ansible-tower-setup-3.8.x.x/inventory ansible-automation-platform-setup- <latest_version>", "./setup.sh", "[automationcontroller] control-plane-1.example.com control-plane-2.example.com control-plane-3.example.com [automationcontroller:vars] node_type=control 1 peers=execution_nodes 2 [execution_nodes] execution-node-1.example.com peers=execution-node-2.example.com execution-node-2.example.com peers=execution-node-3.example.com execution-node-3.example.com peers=execution-node-4.example.com execution-node-4.example.com peers=execution-node-5.example.com node_type=hop execution-node-5.example.com peers=execution-node-6.example.com node_type=hop 3 execution-node-6.example.com peers=execution-node-7.example.com execution-node-7.example.com [execution_nodes:vars] node_type=execution", "automationhub_api_token=<api_token>", "generate_automationhub_token=True", "./setup.sh -i inventory.new.ini -e @credentials.yml -- --ask-vault-pass" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_upgrade_and_migration_guide/migrate-isolated-execution-nodes
Chapter 2. Deploying OpenShift Data Foundation on Microsoft Azure
Chapter 2. Deploying OpenShift Data Foundation on Microsoft Azure You can deploy OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provided by Microsoft Azure installer-provisioned infrastructure (IPI) (type: managed-csi ) that enables you to create internal cluster resources. This results in internal provisioning of the base services, which helps to make additional storage classes available to applications. Also, it is possible to deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation. For more information, see Deploy standalone Multicloud Object Gateway . Note Only internal OpenShift Data Foundation clusters are supported on Microsoft Azure. See Planning your deployment for more information about deployment requirements. Ensure that you have addressed the requirements in Preparing to deploy OpenShift Data Foundation chapter before proceeding with the below steps for deploying using dynamic storage devices: Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation Cluster 2.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.16 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. 
In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify that the Data Foundation dashboard is available. 2.2. Enabling cluster-wide encryption with KMS using the Token authentication method You can enable the key value backend path and policy in the vault for token authentication. Prerequisites Administrator access to the vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Carefully select a unique path name as the backend path that follows the naming convention since you cannot change it later. Procedure Enable the Key/Value (KV) backend path in the vault. For vault KV secret engine API, version 1: For vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Create a token that matches the above policy: 2.3. Enabling cluster-wide encryption with KMS using the Kubernetes authentication method You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS). Prerequisites Administrator access to Vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . The OpenShift Data Foundation operator must be installed from the Operator Hub. Select a unique path name as the backend path that follows the naming convention carefully. You cannot change this path name later. Procedure Create a service account: where, <serviceaccount_name> specifies the name of the service account. For example: Create clusterrolebindings and clusterroles : For example: Create a secret for the serviceaccount token and CA certificate. where, <serviceaccount_name> is the service account created in the earlier step. Get the token and the CA certificate from the secret. Retrieve the OCP cluster endpoint. Fetch the service account issuer: Use the information collected in the previous steps to set up the Kubernetes authentication method in Vault: Important To configure the Kubernetes authentication method in Vault when the issuer is empty: Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Generate the roles: The role odf-rook-ceph-op is later used while you configure the KMS connection details during the creation of the storage system. 2.4. Creating an OpenShift Data Foundation cluster Create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator. Prerequisites The OpenShift Data Foundation operator must be installed from the Operator Hub. For more information, see Installing OpenShift Data Foundation Operator using the Operator Hub . If you want to use Azure Vault [Technology preview] as the key management service provider, make sure to set up client authentication and fetch the client credentials from Azure using the following steps: Create Azure Vault. For more information, see Quickstart: Create a key vault using the Azure portal in Microsoft product documentation. Create Service Principal with certificate based authentication. For more information, see Create an Azure service principal with Azure CLI in Microsoft product documentation.
Set Azure Key Vault role based access control (RBAC). For more information, see Enable Azure RBAC permissions on Key Vault . Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator, and then click Create StorageSystem . In the Backing storage page, select the following: Select Full Deployment for the Deployment type option. Select the Use an existing StorageClass option. Select the Storage Class . By default, it is set to managed-csi . Optional: Select the Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] . This provides a high availability solution for Multicloud Object Gateway where the PostgreSQL pod is a single point of failure. Provide the following connection details: Username Password Server name and Port Database name Select the Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click Next . In the Capacity and nodes page, provide the necessary information: Select a value for Requested Capacity from the dropdown list. It is set to 2 TiB by default. Note Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (three times the raw storage). In the Select Nodes section, select at least three available nodes. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource constrained environment with minimum resources that are lower than recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads. Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. For cloud platforms with multiple availability zones, ensure that the Nodes are spread across different Locations/availability zones. If the nodes selected do not match the OpenShift Data Foundation cluster requirements of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. Click Next . Optional: In the Security and network page, configure the following based on your requirements: To enable encryption, select Enable data encryption for block and file storage . Select either one or both the encryption levels: Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates an encrypted persistent volume (block only) using an encryption enabled storage class.
Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, select one of the following providers and provide the necessary details: Vault Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Thales CipherTrust Manager (using KMIP) Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Azure Key Vault [Technology preview] For information about setting up client authentication and fetching the client credentials in the Azure platform, see the Prerequisites section of this procedure. Enter a unique Connection name for the key management service within the project. Enter Azure Vault URL . Enter Client ID . Enter Tenant ID . Upload the Certificate file in .PEM format. The certificate file must include a client certificate and a private key. To enable in-transit encryption, select In-transit encryption . Select a Network . Click Next . In the Data Protection page, if you are configuring a Regional-DR solution for OpenShift Data Foundation, select the Prepare cluster for disaster recovery (Regional-DR only) checkbox; otherwise, click Next . In the Review and create page, review the configuration details. To modify any configuration settings, click Back . Click Create StorageSystem . Note When your deployment has five or more nodes, racks, or rooms, and when five or more failure domains are present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert .
Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . Verify that the Status of StorageCluster is Ready and has a green tick mark next to it. To verify that all components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment . Additional resources To enable Overprovision Control alerts, refer to Alerts in the Monitoring guide.
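As a complementary command-line check that is not part of the original console-based verification, you can query the storage cluster and its pods directly; the openshift-storage namespace is the default used in this chapter, but other resource names may differ in your environment:

$ oc get storagecluster -n openshift-storage
$ oc get pods -n openshift-storage

The StorageCluster should report a Ready phase, and the operator, MON, MGR, and OSD pods should all be Running before you start placing application workloads on the new storage classes.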
[ "oc annotate namespace openshift-storage openshift.io/node-selector=", "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault token create -policy=odf -format json", "oc -n openshift-storage create serviceaccount <serviceaccount_name>", "oc -n openshift-storage create serviceaccount odf-vault-auth", "oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:_<serviceaccount_name>_", "oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth", "cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: odf-vault-auth-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: <serviceaccount_name> type: kubernetes.io/service-account-token data: {} EOF", "SA_JWT_TOKEN=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)", "OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")", "oc proxy & proxy_pid=USD! issuer=\"USD( curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)\" kill USDproxy_pid", "vault auth enable kubernetes", "vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\" issuer=\"USDissuer\"", "vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"", "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault write auth/kubernetes/role/odf-rook-ceph-op bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h", "vault write auth/kubernetes/role/odf-rook-ceph-osd bound_service_account_names=rook-ceph-osd bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_microsoft_azure/deploying-openshift-data-foundation-on-microsoft-azure_azure
Chapter 12. Provisioning [metal3.io/v1alpha1]
Chapter 12. Provisioning [metal3.io/v1alpha1] Description Provisioning contains configuration used by the Provisioning service (Ironic) to provision baremetal hosts. Provisioning is created by the OpenShift installer using admin or user provided information about the provisioning network and the NIC on the server that can be used to PXE boot it. This CR is a singleton, created by the installer and currently only consumed by the cluster-baremetal-operator to bring up and update containers in a metal3 cluster. Type object 12.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ProvisioningSpec defines the desired state of Provisioning status object ProvisioningStatus defines the observed state of Provisioning 12.1.1. .spec Description ProvisioningSpec defines the desired state of Provisioning Type object Property Type Description additionalNTPServers array (string) AdditionalNTPServers is a list of NTP Servers to be used by the provisioning service bootIsoSource string BootIsoSource provides a way to set the location where the iso image to boot the nodes will be served from. By default the boot iso image is cached locally and served from the Provisioning service (Ironic) nodes using an auxiliary httpd server. If the boot iso image is already served by an httpd server, setting this option to http allows to directly provide the image from there; in this case, the network (either internal or external) where the httpd server that hosts the boot iso is needs to be accessible by the metal3 pod. disableVirtualMediaTLS boolean DisableVirtualMediaTLS turns off TLS on the virtual media server, which may be required for hardware that cannot accept HTTPS links. preProvisioningOSDownloadURLs object PreprovisioningOSDownloadURLs is set of CoreOS Live URLs that would be necessary to provision a worker either using virtual media or PXE. provisioningDHCPExternal boolean ProvisioningDHCPExternal indicates whether the DHCP server for IP addresses in the provisioning DHCP range is present within the metal3 cluster or external to it. This field is being deprecated in favor of provisioningNetwork. provisioningDHCPRange string ProvisioningDHCPRange needs to be interpreted along with ProvisioningDHCPExternal. If the value of provisioningDHCPExternal is set to False, then ProvisioningDHCPRange represents the range of IP addresses that the DHCP server running within the metal3 cluster can use while provisioning baremetal servers. If the value of ProvisioningDHCPExternal is set to True, then the value of ProvisioningDHCPRange will be ignored. 
When the value of ProvisioningDHCPExternal is set to False, indicating an internal DHCP server, and the value of ProvisioningDHCPRange is not set, then the DHCP range is taken to be the default range which goes from .10 to .100 of the ProvisioningNetworkCIDR. This is the only value in all of the Provisioning configuration that can be changed after the installer has created the CR. This value needs to be two comma-separated IP addresses within the ProvisioningNetworkCIDR where the 1st address represents the start of the range and the 2nd address represents the last usable address in the range. provisioningDNS boolean ProvisioningDNS allows sending the DNS information via DHCP on the provisioning network. It is off by default since the Provisioning service itself (Ironic) does not require DNS, but it may be useful for layered products (e.g. ZTP). provisioningIP string ProvisioningIP is the IP address assigned to the provisioningInterface of the baremetal server. This IP address should be within the provisioning subnet, and outside of the DHCP range. provisioningInterface string ProvisioningInterface is the name of the network interface on a baremetal server to the provisioning network. It can have values like eth1 or ens3. provisioningMacAddresses array (string) ProvisioningMacAddresses is a list of mac addresses of network interfaces on a baremetal server to the provisioning network. Use this instead of ProvisioningInterface to allow interfaces of different names. If not provided it will be populated by the BMH.Spec.BootMacAddress of each master. provisioningNetwork string ProvisioningNetwork provides a way to indicate the state of the underlying network configuration for the provisioning network. This field can have one of the following values - Managed - when the provisioning network is completely managed by the Baremetal IPI solution. Unmanaged - when the provisioning network is present and used but the user is responsible for managing DHCP. Virtual media provisioning is recommended but PXE is still available if required. Disabled - when the provisioning network is fully disabled. User can bring up the baremetal cluster using virtual media or assisted installation. If using metal3 for power management, BMCs must be accessible from the machine networks. User should provide two IPs on the external network that would be used for provisioning services. provisioningNetworkCIDR string ProvisioningNetworkCIDR is the network on which the baremetal nodes are provisioned. The provisioningIP and the IPs in the dhcpRange all come from within this network. When using IPv6 and in a network managed by the Baremetal IPI solution this cannot be a network larger than a /64. provisioningOSDownloadURL string ProvisioningOSDownloadURL is the location from which the OS Image used to boot baremetal host machines can be downloaded by the metal3 cluster. virtualMediaViaExternalNetwork boolean VirtualMediaViaExternalNetwork flag when set to "true" allows for workers to boot via Virtual Media and contact metal3 over the External Network. When the flag is set to "false" (which is the default), virtual media deployments can still happen based on the configuration specified in the ProvisioningNetwork i.e when in Disabled mode, over the External Network and over Provisioning Network when in Managed mode. PXE deployments will always use the Provisioning Network and will not be affected by this flag.
watchAllNamespaces boolean WatchAllNamespaces provides a way to explicitly allow use of this Provisioning configuration across all Namespaces. It is an optional configuration which defaults to false and in that state will be used to provision baremetal hosts in only the openshift-machine-api namespace. When set to true, this provisioning configuration would be used for baremetal hosts across all namespaces. 12.1.2. .spec.preProvisioningOSDownloadURLs Description PreprovisioningOSDownloadURLs is set of CoreOS Live URLs that would be necessary to provision a worker either using virtual media or PXE. Type object Property Type Description initramfsURL string InitramfsURL Image URL to be used for PXE deployments isoURL string IsoURL Image URL to be used for Live ISO deployments kernelURL string KernelURL is an Image URL to be used for PXE deployments rootfsURL string RootfsURL Image URL to be used for PXE deployments 12.1.3. .status Description ProvisioningStatus defines the observed state of Provisioning Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 12.1.4. .status.conditions Description conditions is a list of conditions and their status Type array 12.1.5. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Required type Property Type Description lastTransitionTime string message string reason string status string type string 12.1.6. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 12.1.7. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 12.2. 
API endpoints The following API endpoints are available: /apis/metal3.io/v1alpha1/provisionings DELETE : delete collection of Provisioning GET : list objects of kind Provisioning POST : create a Provisioning /apis/metal3.io/v1alpha1/provisionings/{name} DELETE : delete a Provisioning GET : read the specified Provisioning PATCH : partially update the specified Provisioning PUT : replace the specified Provisioning /apis/metal3.io/v1alpha1/provisionings/{name}/status GET : read status of the specified Provisioning PATCH : partially update status of the specified Provisioning PUT : replace status of the specified Provisioning 12.2.1. /apis/metal3.io/v1alpha1/provisionings HTTP method DELETE Description delete collection of Provisioning Table 12.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Provisioning Table 12.2. HTTP responses HTTP code Reponse body 200 - OK ProvisioningList schema 401 - Unauthorized Empty HTTP method POST Description create a Provisioning Table 12.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.4. Body parameters Parameter Type Description body Provisioning schema Table 12.5. HTTP responses HTTP code Reponse body 200 - OK Provisioning schema 201 - Created Provisioning schema 202 - Accepted Provisioning schema 401 - Unauthorized Empty 12.2.2. /apis/metal3.io/v1alpha1/provisionings/{name} Table 12.6. Global path parameters Parameter Type Description name string name of the Provisioning HTTP method DELETE Description delete a Provisioning Table 12.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 12.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Provisioning Table 12.9. HTTP responses HTTP code Reponse body 200 - OK Provisioning schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Provisioning Table 12.10. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.11. HTTP responses HTTP code Reponse body 200 - OK Provisioning schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Provisioning Table 12.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.13. Body parameters Parameter Type Description body Provisioning schema Table 12.14. HTTP responses HTTP code Reponse body 200 - OK Provisioning schema 201 - Created Provisioning schema 401 - Unauthorized Empty 12.2.3. /apis/metal3.io/v1alpha1/provisionings/{name}/status Table 12.15. Global path parameters Parameter Type Description name string name of the Provisioning HTTP method GET Description read status of the specified Provisioning Table 12.16. HTTP responses HTTP code Reponse body 200 - OK Provisioning schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Provisioning Table 12.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.18. HTTP responses HTTP code Reponse body 200 - OK Provisioning schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Provisioning Table 12.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.20. Body parameters Parameter Type Description body Provisioning schema Table 12.21. HTTP responses HTTP code Reponse body 200 - OK Provisioning schema 201 - Created Provisioning schema 401 - Unauthorized Empty
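The field descriptions above can be easier to follow with a concrete manifest. The following is an illustrative sketch only, not output from a real cluster: the object name, interface, and addresses are assumptions (the installer normally creates this singleton for you), and only a subset of spec fields is shown:

apiVersion: metal3.io/v1alpha1
kind: Provisioning
metadata:
  name: provisioning-configuration   # assumed name; check oc get provisioning for the real singleton
spec:
  provisioningNetwork: Managed        # Managed, Unmanaged, or Disabled
  provisioningInterface: eth1         # NIC wired to the provisioning network (hypothetical)
  provisioningIP: 172.22.0.3          # inside the CIDR, outside the DHCP range
  provisioningNetworkCIDR: 172.22.0.0/24
  provisioningDHCPRange: 172.22.0.10,172.22.0.100
  watchAllNamespaces: false

Because the CR is a singleton, day-two changes are usually applied with a merge patch rather than by creating a second object, for example: oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"watchAllNamespaces":true}}' (substitute the object name reported by oc get provisioning).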
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/provisioning_apis/provisioning-metal3-io-v1alpha1
Chapter 3. Red Hat build of OpenJDK 8.0.412 release notes
Chapter 3. Red Hat build of OpenJDK 8.0.412 release notes The latest Red Hat build of OpenJDK 8 release might include new features. Additionally, the latest release might enhance, deprecate, or remove features that originated from Red Hat build of OpenJDK 8 releases. Note For all the other changes and security fixes, see OpenJDK 8u412 Released . Red Hat build of OpenJDK new features and enhancements Review the following release notes to understand new features and feature enhancements that Red Hat build of OpenJDK 8.0.412 provides: Kerberos 5 replay cache interoperability with MIT krb5-1.15 In Red Hat build of OpenJDK 8.0.412, the Kerberos 5 replay cache file ( rcache ) uses the SHA256 algorithm. This supersedes the behavior in releases where rcache used the MD5 algorithm. The Massachusetts Institute of Technology (MIT) Kerberos 5 Release 1.15 (krb5-1.15) also uses the SHA256 algorithm, which is interoperable with earlier releases of MIT krb5. If you want to continue using the MD5 algorithm, ensure that the new system property jdk.krb5.rcache.useMD5 is set to true . The MD5 algorithm is useful in the following situations: If your system has a coarse clock and depends on hash values in replay attack detection If your system needs to interoperate with the rcache files in older OpenJDK releases See JDK-8168518 (JDK Bug System) . SystemTray.isSupported() method returns false on most Linux desktops In Red Hat build of OpenJDK 8.0.412, the java.awt.SystemTray.isSupported() method returns false on systems that do not support the SystemTray API correctly. This enhancement is in accordance with the SystemTray API specification. The SystemTray API is used to interact with the taskbar in the system desktop to provide notifications. SystemTray might also include an icon representing an application. Due to an underlying platform issue, GNOME desktop support for taskbar icons has not worked correctly for several years. This platform issue affects the JDK's ability to provide SystemTray support on GNOME desktops. This issue typically affects systems that use GNOME Shell 44 or earlier. Note Because the lack of correct SystemTray support is a long-standing issue on some systems, this API enhancement to return false on affected systems is likely to have a minimal impact on users. See JDK-8322750 (JDK Bug System) . Certainly R1 and E1 root certificates added In Red Hat build of OpenJDK 8.0.412, the cacerts truststore includes two Certainly root certificates: Certificate 1 Name: Certainly Alias name: certainlyrootr1 Distinguished name: CN=Certainly Root R1, O=Certainly, C=US Certificate 2 Name: Certainly Alias name: certainlyroote1 Distinguished name: CN=Certainly Root E1, O=Certainly, C=US See JDK-8321408 (JDK Bug System) .
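The following command-line sketch is illustrative only and is not part of the release notes; the application JAR name is a placeholder, and the cacerts path assumes a standard JDK 8 JRE layout.
# Opt back into the MD5-based replay cache (app.jar is a placeholder)
java -Djdk.krb5.rcache.useMD5=true -jar app.jar
# Verify the Certainly roots in the JDK 8 cacerts truststore
keytool -list -keystore "$JAVA_HOME/jre/lib/security/cacerts" -storepass changeit -alias certainlyrootr1
keytool -list -keystore "$JAVA_HOME/jre/lib/security/cacerts" -storepass changeit -alias certainlyroote1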
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.412/openjdk-80412-release-notes_openjdk
22.8. User Certificates
22.8. User Certificates For information on user certificates, see Chapter 24, Managing Certificates for Users, Hosts, and Services .
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/user-certificates-management
8.128. linuxptp
8.128. linuxptp 8.128.1. RHBA-2014:1491 - linuxptp bug fix and enhancement update Updated linuxptp packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The Linux PTP project is a software implementation of the Precision Time Protocol (PTP) according to IEEE standard 1588 for Linux. These packages provide a robust implementation of the standard and use the most relevant and modern Application Programming Interfaces (API) offered by the Linux kernel. Supporting legacy APIs and other platforms is not a goal. The notable bug fixes and enhancements are: * The ptp4l application can be configured to select the delay mechanism automatically. However, this configuration did not work with the P2P delay mechanism so that the delay timer was not reset and the utility did not make any peer delay measurements. This update provides a patch to address this bug and ptp4l now correctly measures the peer delay in the described scenario. (BZ# 1011022 ) * Previously, the measured network delay was processed with a moving average algorithm, which is sensitive to outliers. This could for example negatively affect the time of recovery from an external clock step. This update adds a support for median filtering of the measured path delay. As a result, the algorithm that is used to process the measured delay can now be configured. The median filter, which is less sensitive to outliers, is set by default. (BZ# 1016356 ) * When the phc2sys utility is used with a Pulse Per Second (PPS) device and the corresponding network interface or Precision Time Protocol (PTP) clock is not specified with the "-i" or "-s" option, the user has to enable the device manually by running the "echo 1 > /sys/class/ptp/ptp0/pps_enable" command before phc2sys starts. When the device is not enabled before phc2sys starts, the "failed to fetch PPS: Connection timed out" error is returned. However, this requirement was not properly documented, which could confuse the users. With this update, this information has been added to the phc2sys(8) manual page. (BZ# 1019121 ) In addition, this update adds the linuxptp packages to the PowerPC version of Red Hat Enterprise Linux 6. (BZ# 1095400 ) Note The linuxptp packages have been upgraded to upstream version 1.4, which provides a number of bug fixes and enhancements over the version. (BZ# 1067502 ) The notable bug fixes and enhancements are: * The ptp4l application can be configured to select the delay mechanism automatically. However, this configuration did not work with the P2P delay mechanism so that the delay timer was not reset and the utility did not make any peer delay measurements. This update provides a patch to address this bug and ptp4l now correctly measures the peer delay in the described scenario. (BZ#1011022) * Previously, the measured network delay was processed with a moving average algorithm, which is sensitive to outliers. This could for example negatively affect the time of recovery from an external clock step. This update adds a support for median filtering of the measured path delay. As a result, the algorithm that is used to process the measured delay can now be configured. The median filter, which is less sensitive to outliers, is set by default. 
(BZ#1016356) * When the phc2sys utility is used with a Pulse Per Second (PPS) device and the corresponding network interface or Precision Time Protocol (PTP) clock is not specified with the "-i" or "-s" option, the user has to enable the device manually by running the "echo 1 > /sys/class/ptp/ptp0/pps_enable" command before phc2sys starts. When the device is not enabled before phc2sys starts, the "failed to fetch PPS: Connection timed out" error is returned. However, this requirement was not properly documented, which could confuse the users. With this update, this information has been added to the phc2sys(8) manual page. (BZ#1019121) In addition, this update adds the linuxptp packages to the PowerPC version of Red Hat Enterprise Linux 6. (BZ#1095400) Bug Fixes BZ# 1011022 The ptp4l application can be configured to select the delay mechanism automatically. However, this configuration did not work with the P2P delay mechanism so that the delay timer was not reset and the utility did not make any peer delay measurements. This update provides a patch to address this bug and ptp4l now correctly measures the peer delay in the described scenario. BZ# 1016356 Previously, the measured network delay was processed with a moving average algorithm, which is sensitive to outliers. This could for example negatively affect the time of recovery from an external clock step. This update adds a support for median filtering of the measured path delay. As a result, the algorithm that is used to process the measured delay can now be configured. The median filter, which is less sensitive to outliers, is set by default. BZ# 1019121 When the phc2sys utility is used with a Pulse Per Second (PPS) device and the corresponding network interface or Precision Time Protocol (PTP) clock is not specified with the "-i" or "-s" option, the user has to enable the device manually by running the "echo 1 > /sys/class/ptp/ptp0/pps_enable" command before phc2sys starts. When the device is not enabled before phc2sys starts, the "failed to fetch PPS: Connection timed out" error is returned. However, this requirement was not properly documented, which could confuse the users. With this update, this information has been added to the phc2sys(8) manual page. The linuxptp packages have been upgraded to upstream version 1.4, which provides a number of bug fixes and enhancements over the version. (BZ#1067502) The notable bug fixes and enhancements are: In addition, this update adds the linuxptp packages to the PowerPC version of Red Hat Enterprise Linux 6. (BZ#1095400) Users of linuxptp are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
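As an illustrative sketch only (not part of the erratum), the configuration keys behind the delay mechanism and delay filter changes, together with the manual PPS enablement mentioned above, might look like this; the option names follow the linuxptp 1.4 defaults and ptp0 is a placeholder device.
# /etc/ptp4l.conf (excerpt): select the delay mechanism automatically and
# keep the default median filter for the measured path delay
[global]
delay_mechanism        Auto
delay_filter           moving_median
delay_filter_length    10
# Enable the PPS device manually before starting phc2sys when no interface
# or PTP clock is given with -i/-s, as documented in phc2sys(8):
echo 1 > /sys/class/ptp/ptp0/pps_enable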
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/linuxptp
Advanced Overcloud Customization
Advanced Overcloud Customization Red Hat OpenStack Platform 16.2 Methods for configuring advanced features using Red Hat OpenStack Platform director OpenStack Documentation Team [email protected]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/advanced_overcloud_customization/index
Chapter 49. Mask Fields Action
Chapter 49. Mask Fields Action Mask fields with a constant value in the message in transit 49.1. Configuration Options The following table summarizes the configuration options available for the mask-field-action Kamelet: Property Name Description Type Default Example fields * Fields Comma separated list of fields to mask string replacement * Replacement Replacement for the fields to be masked string Note Fields marked with an asterisk (*) are mandatory. 49.2. Dependencies At runtime, the mask-field-action Kamelet relies upon the presence of the following dependencies: github:openshift-integration.kamelet-catalog:camel-kamelets-utils:kamelet-catalog-1.6-SNAPSHOT camel:jackson camel:kamelet camel:core 49.3. Usage This section describes how you can use the mask-field-action . 49.3.1. Knative Action You can use the mask-field-action Kamelet as an intermediate step in a Knative binding. mask-field-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: mask-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: mask-field-action properties: fields: "The Fields" replacement: "The Replacement" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 49.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 49.3.1.2. Procedure for using the cluster CLI Save the mask-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f mask-field-action-binding.yaml 49.3.1.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step mask-field-action -p "step-0.fields=The Fields" -p "step-0.replacement=The Replacement" channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 49.3.2. Kafka Action You can use the mask-field-action Kamelet as an intermediate step in a Kafka binding. mask-field-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: mask-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: mask-field-action properties: fields: "The Fields" replacement: "The Replacement" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 49.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 49.3.2.2. Procedure for using the cluster CLI Save the mask-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f mask-field-action-binding.yaml 49.3.2.3. 
Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step mask-field-action -p "step-0.fields=The Fields" -p "step-0.replacement=The Replacement" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 49.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/mask-field-action.kamelet.yaml
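For illustration only (this example is not part of the Kamelet documentation), assuming a JSON message and the configuration fields=ssn with replacement=xxx, the action would transform the payload roughly as follows:
{ "name": "Alice", "ssn": "123-45-6789" }   <- message entering the mask-field-action step
{ "name": "Alice", "ssn": "xxx" }           <- message leaving the step, with the field masked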
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: mask-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"Hello\" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: mask-field-action properties: fields: \"The Fields\" replacement: \"The Replacement\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel", "apply -f mask-field-action-binding.yaml", "kamel bind timer-source?message=Hello --step mask-field-action -p \"step-0.fields=The Fields\" -p \"step-0.replacement=The Replacement\" channel:mychannel", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: mask-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"Hello\" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: mask-field-action properties: fields: \"The Fields\" replacement: \"The Replacement\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic", "apply -f mask-field-action-binding.yaml", "kamel bind timer-source?message=Hello --step mask-field-action -p \"step-0.fields=The Fields\" -p \"step-0.replacement=The Replacement\" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/kamelets_reference/mask-field-action
Chapter 10. AWS Simple Notification System (SNS)
Chapter 10. AWS Simple Notification System (SNS) Only producer is supported The AWS2 SNS component allows messages to be sent to an Amazon Simple Notification Topic. The implementation of the Amazon API is provided by the AWS SDK . Prerequisites You must have a valid Amazon Web Services developer account, and be signed up to use Amazon SNS. More information is available at Amazon SNS . 10.1. Dependencies When using aws2-sns with Red Hat build of Camel Spring Boot, add the following Maven dependency to your pom.xml to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-aws2-sns-starter</artifactId> </dependency> 10.2. URI Format The topic will be created if it does not already exist. You can append query options to the URI in the following format, ?options=value&option2=value&... 10.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 10.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 10.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allow you to externalize the configuration from your code, giving you more flexible and reusable code. 10.4. Component Options The AWS Simple Notification System (SNS) component supports 24 options, which are listed below. Name Description Default Type amazonSNSClient (producer) Autowired To use the AmazonSNS as the client. SnsClient autoCreateTopic (producer) Setting the autocreation of the topic. false boolean configuration (producer) Component configuration. Sns2Configuration kmsMasterKeyId (producer) The ID of an AWS-managed customer master key (CMK) for Amazon SNS or a custom CMK. String lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean messageDeduplicationIdStrategy (producer) Only for FIFO Topic. Strategy for setting the messageDeduplicationId on the message. Can be one of the following options: useExchangeId, useContentBasedDeduplication.
For the useContentBasedDeduplication option, no messageDeduplicationId will be set on the message. Enum values: useExchangeId useContentBasedDeduplication useExchangeId String messageGroupIdStrategy (producer) Only for FIFO Topic. Strategy for setting the messageGroupId on the message. Can be one of the following options: useConstant, useExchangeId, usePropertyValue. For the usePropertyValue option, the value of property CamelAwsMessageGroupId will be used. Enum values: useConstant useExchangeId usePropertyValue String messageStructure (producer) The message structure to use such as json. String overrideEndpoint (producer) Set the need for overidding the endpoint. This option needs to be used in combination with uriEndpointOverride option. false boolean policy (producer) The policy for this topic. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String proxyHost (producer) To define a proxy host when instantiating the SNS client. String proxyPort (producer) To define a proxy port when instantiating the SNS client. Integer proxyProtocol (producer) To define a proxy protocol when instantiating the SNS client. Enum values: HTTP HTTPS HTTPS Protocol queueUrl (producer) The queueUrl to subscribe to. String region (producer) The region in which SNS client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You'll need to use the name Region.EU_WEST_1.id(). String serverSideEncryptionEnabled (producer) Define if Server Side Encryption is enabled or not on the topic. false boolean subject (producer) The subject which is used if the message header 'CamelAwsSnsSubject' is not present. String subscribeSNStoSQS (producer) Define if the subscription between SNS Topic and SQS must be done or not. false boolean trustAllCertificates (producer) If we want to trust all certificates in case of overriding the endpoint. false boolean uriEndpointOverride (producer) Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. String useDefaultCredentialsProvider (producer) Set whether the SNS client should expect to load credentials on an AWS infra instance or to expect static credentials to be passed in. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean accessKey (security) Amazon AWS Access Key. String secretKey (security) Amazon AWS Secret Key. String 10.5. Endpoint Options The AWS Simple Notification System (SNS) endpoint is configured using URI syntax: with the following path and query parameters: 10.5.1. Path Parameters (1 parameters) Name Description Default Type topicNameOrArn (producer) Required Topic name or ARN. String 10.5.2. Query Parameters (23 parameters) Name Description Default Type amazonSNSClient (producer) Autowired To use the AmazonSNS as the client. SnsClient autoCreateTopic (producer) Setting the autocreation of the topic. false boolean headerFilterStrategy (producer) To use a custom HeaderFilterStrategy to map headers to/from Camel. 
HeaderFilterStrategy kmsMasterKeyId (producer) The ID of an AWS-managed customer master key (CMK) for Amazon SNS or a custom CMK. String lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean messageDeduplicationIdStrategy (producer) Only for FIFO Topic. Strategy for setting the messageDeduplicationId on the message. Can be one of the following options: useExchangeId, useContentBasedDeduplication. For the useContentBasedDeduplication option, no messageDeduplicationId will be set on the message. Enum values: useExchangeId useContentBasedDeduplication useExchangeId String messageGroupIdStrategy (producer) Only for FIFO Topic. Strategy for setting the messageGroupId on the message. Can be one of the following options: useConstant, useExchangeId, usePropertyValue. For the usePropertyValue option, the value of property CamelAwsMessageGroupId will be used. Enum values: useConstant useExchangeId usePropertyValue String messageStructure (producer) The message structure to use such as json. String overrideEndpoint (producer) Set the need for overidding the endpoint. This option needs to be used in combination with uriEndpointOverride option. false boolean policy (producer) The policy for this topic. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String proxyHost (producer) To define a proxy host when instantiating the SNS client. String proxyPort (producer) To define a proxy port when instantiating the SNS client. Integer proxyProtocol (producer) To define a proxy protocol when instantiating the SNS client. Enum values: HTTP HTTPS HTTPS Protocol queueUrl (producer) The queueUrl to subscribe to. String region (producer) The region in which SNS client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You'll need to use the name Region.EU_WEST_1.id(). String serverSideEncryptionEnabled (producer) Define if Server Side Encryption is enabled or not on the topic. false boolean subject (producer) The subject which is used if the message header 'CamelAwsSnsSubject' is not present. String subscribeSNStoSQS (producer) Define if the subscription between SNS Topic and SQS must be done or not. false boolean trustAllCertificates (producer) If we want to trust all certificates in case of overriding the endpoint. false boolean uriEndpointOverride (producer) Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. String useDefaultCredentialsProvider (producer) Set whether the SNS client should expect to load credentials on an AWS infra instance or to expect static credentials to be passed in. false boolean accessKey (security) Amazon AWS Access Key. String secretKey (security) Amazon AWS Secret Key. String Required SNS component options You have to provide the amazonSNSClient in the Registry or your accessKey and secretKey to access the Amazon's SNS . 10.6. 
Usage 10.6.1. Static credentials vs Default Credential Provider You can avoid using explicit static credentials by specifying the useDefaultCredentialsProvider option and setting it to true. In that case, credentials are resolved from the following sources: Java system properties - aws.accessKeyId and aws.secretKey Environment variables - AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. Web Identity Token from AWS STS. The shared credentials and config files. Amazon ECS container credentials - loaded from the Amazon ECS if the environment variable AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is set. Amazon EC2 Instance profile credentials. For more information about this you can look at AWS credentials documentation . 10.6.2. Message headers evaluated by the SNS producer Header Type Description CamelAwsSnsSubject String The Amazon SNS message subject. If not set, the subject from the SnsConfiguration is used. 10.6.3. Message headers set by the SNS producer Header Type Description CamelAwsSnsMessageId String The Amazon SNS message ID. 10.6.4. Advanced AmazonSNS configuration If you need more control over the SnsClient instance configuration you can create your own instance and refer to it from the URI: from("direct:start") .to("aws2-sns://MyTopic?amazonSNSClient=#client"); The #client refers to an AmazonSNS client in the Registry. 10.6.5. Create a subscription between an AWS SNS Topic and an AWS SQS Queue You can create a subscription of an SQS Queue to an SNS Topic in this way: from("direct:start") .to("aws2-sns://test-camel-sns1?amazonSNSClient=#amazonSNSClient&subscribeSNStoSQS=true&queueUrl=https://sqs.eu-central-1.amazonaws.com/780410022472/test-camel"); The #amazonSNSClient refers to a SnsClient in the Registry. By setting subscribeSNStoSQS to true and providing the queueUrl of an existing SQS Queue, you'll be able to subscribe your SQS Queue to your SNS Topic. At this point you can consume messages coming from the SNS Topic through your SQS Queue: from("aws2-sqs://test-camel?amazonSQSClient=#amazonSQSClient&delay=50&maxMessagesPerPoll=5") .to(...); 10.7. Topic Autocreation With the option autoCreateTopic users are able to avoid the autocreation of an SNS Topic in case it doesn't exist. The default for this option is false . If set to false any operation on a non-existent topic in AWS won't be successful and an error will be returned. 10.8. SNS FIFO SNS FIFO topics are supported. While creating the SQS queue that you will subscribe to the SNS topic, there is an important point to remember: you'll need to make it possible for the SNS Topic to send messages to the SQS Queue. Example Suppose you created an SNS FIFO Topic called Order.fifo and an SQS Queue called QueueSub.fifo . In the access Policy of the QueueSub.fifo you should submit something like this: { "Version": "2008-10-17", "Id": "__default_policy_ID", "Statement": [ { "Sid": "__owner_statement", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::780560123482:root" }, "Action": "SQS:*", "Resource": "arn:aws:sqs:eu-west-1:780560123482:QueueSub.fifo" }, { "Effect": "Allow", "Principal": { "Service": "sns.amazonaws.com" }, "Action": "SQS:SendMessage", "Resource": "arn:aws:sqs:eu-west-1:780560123482:QueueSub.fifo", "Condition": { "ArnLike": { "aws:SourceArn": "arn:aws:sns:eu-west-1:780410022472:Order.fifo" } } } ] } This is a critical step to make the subscription work correctly. 10.8.1. SNS Fifo Topic Message group Id Strategy and message Deduplication Id Strategy When sending something to the FIFO topic you'll need to always set up a message group Id strategy.
If the content-based message deduplication has been enabled on the SNS Fifo topic, there is no need to set a message deduplication id strategy; otherwise, you'll have to set it. 10.9. Examples 10.9.1. Producer Examples Sending to a topic from("direct:start") .to("aws2-sns://camel-topic?subject=The+subject+message&autoCreateTopic=true"); 10.10. Dependencies Maven users will need to add the following dependency to their pom.xml. pom.xml <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-aws2-sns</artifactId> <version>${camel-version}</version> </dependency> where {camel-version} must be replaced by the actual version of Camel. 10.11. Spring Boot Auto-Configuration The component supports 25 options, which are listed below. Name Description Default Type camel.component.aws2-sns.access-key Amazon AWS Access Key. String camel.component.aws2-sns.amazon-s-n-s-client To use the AmazonSNS as the client. The option is a software.amazon.awssdk.services.sns.SnsClient type. SnsClient camel.component.aws2-sns.auto-create-topic Setting the autocreation of the topic. false Boolean camel.component.aws2-sns.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.aws2-sns.configuration Component configuration. The option is a org.apache.camel.component.aws2.sns.Sns2Configuration type. Sns2Configuration camel.component.aws2-sns.enabled Whether to enable auto configuration of the aws2-sns component. This is enabled by default. Boolean camel.component.aws2-sns.kms-master-key-id The ID of an AWS-managed customer master key (CMK) for Amazon SNS or a custom CMK. String camel.component.aws2-sns.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.aws2-sns.message-deduplication-id-strategy Only for FIFO Topic. Strategy for setting the messageDeduplicationId on the message. Can be one of the following options: useExchangeId, useContentBasedDeduplication. For the useContentBasedDeduplication option, no messageDeduplicationId will be set on the message. useExchangeId String camel.component.aws2-sns.message-group-id-strategy Only for FIFO Topic. Strategy for setting the messageGroupId on the message. Can be one of the following options: useConstant, useExchangeId, usePropertyValue. For the usePropertyValue option, the value of property CamelAwsMessageGroupId will be used. String camel.component.aws2-sns.message-structure The message structure to use such as json. String camel.component.aws2-sns.override-endpoint Set the need for overriding the endpoint. This option needs to be used in combination with uriEndpointOverride option. false Boolean camel.component.aws2-sns.policy The policy for this topic.
Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String camel.component.aws2-sns.proxy-host To define a proxy host when instantiating the SNS client. String camel.component.aws2-sns.proxy-port To define a proxy port when instantiating the SNS client. Integer camel.component.aws2-sns.proxy-protocol To define a proxy protocol when instantiating the SNS client. Protocol camel.component.aws2-sns.queue-url The queueUrl to subscribe to. String camel.component.aws2-sns.region The region in which SNS client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You'll need to use the name Region.EU_WEST_1.id(). String camel.component.aws2-sns.secret-key Amazon AWS Secret Key. String camel.component.aws2-sns.server-side-encryption-enabled Define if Server Side Encryption is enabled or not on the topic. false Boolean camel.component.aws2-sns.subject The subject which is used if the message header 'CamelAwsSnsSubject' is not present. String camel.component.aws2-sns.subscribe-s-n-sto-s-q-s Define if the subscription between SNS Topic and SQS must be done or not. false Boolean camel.component.aws2-sns.trust-all-certificates If we want to trust all certificates in case of overriding the endpoint. false Boolean camel.component.aws2-sns.uri-endpoint-override Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. String camel.component.aws2-sns.use-default-credentials-provider Set whether the SNS client should expect to load credentials on an AWS infra instance or to expect static credentials to be passed in. false Boolean
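As an illustration of the auto-configuration options listed above, a minimal application.properties sketch might look like the following; the region value is a placeholder, and the commented static credentials are shown only as an alternative to the default credentials provider:
# application.properties (sketch - values are placeholders)
camel.component.aws2-sns.region=eu-west-1
camel.component.aws2-sns.auto-create-topic=false
camel.component.aws2-sns.use-default-credentials-provider=true
# Alternatively, with static credentials:
# camel.component.aws2-sns.access-key=MY_ACCESS_KEY
# camel.component.aws2-sns.secret-key=MY_SECRET_KEY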
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-aws2-sns-starter</artifactId> </dependency>", "aws2-sns://topicNameOrArn[?options]", "aws2-sns:topicNameOrArn", "from(\"direct:start\") .to(\"aws2-sns://MyTopic?amazonSNSClient=#client\");", "from(\"direct:start\") .to(\"aws2-sns://test-camel-sns1?amazonSNSClient=#amazonSNSClient&subscribeSNStoSQS=true&queueUrl=https://sqs.eu-central-1.amazonaws.com/780410022472/test-camel\");", "from(\"aws2-sqs://test-camel?amazonSQSClient=#amazonSQSClient&delay=50&maxMessagesPerPoll=5\") .to(...);", "{ \"Version\": \"2008-10-17\", \"Id\": \"__default_policy_ID\", \"Statement\": [ { \"Sid\": \"__owner_statement\", \"Effect\": \"Allow\", \"Principal\": { \"AWS\": \"arn:aws:iam::780560123482:root\" }, \"Action\": \"SQS:*\", \"Resource\": \"arn:aws:sqs:eu-west-1:780560123482:QueueSub.fifo\" }, { \"Effect\": \"Allow\", \"Principal\": { \"Service\": \"sns.amazonaws.com\" }, \"Action\": \"SQS:SendMessage\", \"Resource\": \"arn:aws:sqs:eu-west-1:780560123482:QueueSub.fifo\", \"Condition\": { \"ArnLike\": { \"aws:SourceArn\": \"arn:aws:sns:eu-west-1:780410022472:Order.fifo\" } } } ] }", "from(\"direct:start\") .to(\"aws2-sns://camel-topic?subject=The+subject+message&autoCreateTopic=true\");", "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-aws2-sns</artifactId> <version>USD{camel-version}</version> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-aws2-sns-component-starter
Chapter 4. Configuration Hooks
Chapter 4. Configuration Hooks The configuration hooks provide a method to inject your own configuration functions into the Overcloud deployment process. This includes hooks for injecting custom configuration before and after the main Overcloud services configuration and a hook for modifying and including Puppet-based configuration. 4.1. First Boot: Customizing First Boot Configuration The director provides a mechanism to perform configuration on all nodes upon the initial creation of the Overcloud. The director achieves this through cloud-init , which you can call using the OS::TripleO::NodeUserData resource type. In this example, update the nameserver with a custom IP address on all nodes. First, create a basic Heat template ( /home/stack/templates/nameserver.yaml ) that runs a script to append each node's resolv.conf with a specific nameserver. You can use the OS::Heat::MultipartMime resource type to send the configuration script. Create an environment file ( /home/stack/templates/firstboot.yaml ) that registers your Heat template as the OS::TripleO::NodeUserData resource type. To add the first boot configuration, add the environment file to the stack along with your other environment files when first creating the Overcloud. For example: The -e applies the environment file to the Overcloud stack. This adds the configuration to all nodes when they are first created and boot for the first time. Subsequent inclusions of these templates, such as updating the Overcloud stack, do not run these scripts. Important You can only register the OS::TripleO::NodeUserData to one Heat template. Subsequent usage overrides the Heat template to use. 4.2. Pre-Configuration: Customizing Specific Overcloud Roles Important Previous versions of this document used the OS::TripleO::Tasks::*PreConfig resources to provide pre-configuration hooks on a per role basis. The director's Heat template collection requires dedicated use of these hooks, which means you should not use them for custom use. Instead, use the OS::TripleO::*ExtraConfigPre hooks outlined below. The Overcloud uses Puppet for the core configuration of OpenStack components. The director provides a set of hooks to provide custom configuration for specific node roles after the first boot completes and before the core configuration begins. These hooks include: OS::TripleO::ControllerExtraConfigPre Additional configuration applied to Controller nodes before the core Puppet configuration. OS::TripleO::ComputeExtraConfigPre Additional configuration applied to Compute nodes before the core Puppet configuration. OS::TripleO::CephStorageExtraConfigPre Additional configuration applied to Ceph Storage nodes before the core Puppet configuration. OS::TripleO::ObjectStorageExtraConfigPre Additional configuration applied to Object Storage nodes before the core Puppet configuration. OS::TripleO::BlockStorageExtraConfigPre Additional configuration applied to Block Storage nodes before the core Puppet configuration. OS::TripleO::[ROLE]ExtraConfigPre Additional configuration applied to custom nodes before the core Puppet configuration. Replace [ROLE] with the composable role name. In this example, you first create a basic Heat template ( /home/stack/templates/nameserver.yaml ) that runs a script to write to a node's resolv.conf with a variable nameserver. In this example, the resources section contains the following parameters: CustomExtraConfigPre This defines a software configuration.
In this example, we define a Bash script and Heat replaces _NAMESERVER_IP_ with the value stored in the nameserver_ip parameter. CustomExtraDeploymentPre This executes a software configuration, which is the software configuration from the CustomExtraConfigPre resource. Note the following: The config parameter makes a reference to the CustomExtraConfigPre resource so Heat knows what configuration to apply. The server parameter retrieves a map of the Overcloud nodes. This parameter is provided by the parent template and is mandatory in templates for this hook. The actions parameter defines when to apply the configuration. In this case, apply the configuration when the Overcloud is created. Possible actions include CREATE , UPDATE , DELETE , SUSPEND , and RESUME . input_values contains a parameter called deploy_identifier , which stores the DeployIdentifier from the parent template. This parameter provides a timestamp to the resource for each deployment update. This ensures the resource reapplies on subsequent overcloud updates. Create an environment file ( /home/stack/templates/pre_config.yaml ) that registers your Heat template to the role-based resource type. For example, to apply only to Controller nodes, use the ControllerExtraConfigPre hook: To apply the configuration, add the environment file to the stack along with your other environment files when creating or updating the Overcloud. For example: This applies the configuration to all Controller nodes before the core configuration begins on either the initial Overcloud creation or subsequent updates. Important You can only register each resource to only one Heat template per hook. Subsequent usage overrides the Heat template to use. 4.3. Pre-Configuration: Customizing All Overcloud Roles The Overcloud uses Puppet for the core configuration of OpenStack components. The director provides a hook to configure all node types after the first boot completes and before the core configuration begins: OS::TripleO::NodeExtraConfig Additional configuration applied to all nodes roles before the core Puppet configuration. In this example, create a basic Heat template ( /home/stack/templates/nameserver.yaml ) that runs a script to append each node's resolv.conf with a variable nameserver. In this example, the resources section contains the following parameters: CustomExtraConfigPre This defines a software configuration. In this example, we define a Bash script and Heat replaces _NAMESERVER_IP_ with the value stored in the nameserver_ip parameter. CustomExtraDeploymentPre This executes a software configuration, which is the software configuration from the CustomExtraConfigPre resource. Note the following: The config parameter makes a reference to the CustomExtraConfigPre resource so Heat knows what configuration to apply. The server parameter retrieves a map of the Overcloud nodes. This parameter is provided by the parent template and is mandatory in templates for this hook. The actions parameter defines when to apply the configuration. In this case, we only apply the configuration when the Overcloud is created. Possible actions include CREATE , UPDATE , DELETE , SUSPEND , and RESUME . The input_values parameter contains a sub-parameter called deploy_identifier , which stores the DeployIdentifier from the parent template. This parameter provides a timestamp to the resource for each deployment update. This ensures the resource reapplies on subsequent overcloud updates. 
Next, create an environment file ( /home/stack/templates/pre_config.yaml ) that registers your heat template as the OS::TripleO::NodeExtraConfig resource type. To apply the configuration, add the environment file to the stack along with your other environment files when creating or updating the Overcloud. For example: This applies the configuration to all nodes before the core configuration begins on either the initial Overcloud creation or subsequent updates. Important You can only register the OS::TripleO::NodeExtraConfig to one Heat template. Subsequent usage overrides the Heat template to use. 4.4. Post-Configuration: Customizing All Overcloud Roles Important Previous versions of this document used the OS::TripleO::Tasks::*PostConfig resources to provide post-configuration hooks on a per role basis. The director's Heat template collection requires dedicated use of these hooks, which means you should not use them for custom use. Instead, use the OS::TripleO::NodeExtraConfigPost hook outlined below. A situation might occur where you have completed the creation of your Overcloud but want to add additional configuration to all roles, either on initial creation or on a subsequent update of the Overcloud. In this case, you use the following post-configuration hook: OS::TripleO::NodeExtraConfigPost Additional configuration applied to all node roles after the core Puppet configuration. In this example, you first create a basic heat template ( /home/stack/templates/nameserver.yaml ) that runs a script to append each node's resolv.conf with a variable nameserver. In this example, the resources section contains the following: CustomExtraConfig This defines a software configuration. In this example, we define a Bash script and Heat replaces _NAMESERVER_IP_ with the value stored in the nameserver_ip parameter. CustomExtraDeployments This executes a software configuration, which is the software configuration from the CustomExtraConfig resource. Note the following: The config parameter makes a reference to the CustomExtraConfig resource so Heat knows what configuration to apply. The servers parameter retrieves a map of the Overcloud nodes. This parameter is provided by the parent template and is mandatory in templates for this hook. The actions parameter defines when to apply the configuration. In this case, we apply the configuration when the Overcloud is created. Possible actions include CREATE , UPDATE , DELETE , SUSPEND , and RESUME . input_values contains a parameter called deploy_identifier , which stores the DeployIdentifier from the parent template. This parameter provides a timestamp to the resource for each deployment update. This ensures the resource reapplies on subsequent overcloud updates. Create an environment file ( /home/stack/templates/post_config.yaml ) that registers your Heat template as the OS::TripleO::NodeExtraConfigPost resource type. To apply the configuration, add the environment file to the stack along with your other environment files when creating or updating the Overcloud. For example: This applies the configuration to all nodes after the core configuration completes on either initial Overcloud creation or subsequent updates. Important You can only register the OS::TripleO::NodeExtraConfigPost to one Heat template. Subsequent usage overrides the Heat template to use. 4.5. Puppet: Customizing Hieradata for Roles The Heat template collection contains a set of parameters to pass extra configuration to certain node types.
These parameters save the configuration as hieradata for the node's Puppet configuration. These parameters are: ControllerExtraConfig Configuration to add to all Controller nodes. ComputeExtraConfig Configuration to add to all Compute nodes. BlockStorageExtraConfig Configuration to add to all Block Storage nodes. ObjectStorageExtraConfig Configuration to add to all Object Storage nodes. CephStorageExtraConfig Configuration to add to all Ceph Storage nodes. [ROLE]ExtraConfig Configuration to add to a composable role. Replace [ROLE] with the composable role name. ExtraConfig Configuration to add to all nodes. To add extra configuration to the post-deployment configuration process, create an environment file that contains these parameters in the parameter_defaults section. For example, to increase the reserved memory for Compute hosts to 1024 MB and set the VNC keymap to Japanese: Include this environment file when running openstack overcloud deploy . Important You can only define each parameter once. Subsequent usage overrides values. 4.6. Puppet: Customizing Hieradata for Individual Nodes You can set Puppet hieradata for individual nodes using the Heat template collection. To accomplish this, acquire the system UUID saved as part of the introspection data for a node: This outputs a system UUID. For example: Use this system UUID in an environment file that defines node-specific hieradata and registers the per_node.yaml template to a pre-configuration hook. For example: Include this environment file when running openstack overcloud deploy . The per_node.yaml template generates a set of hieradata files on nodes that correspond to each system UUID and contain the hieradata you defined. If a UUID is not defined, the resulting hieradata file is empty. In the example, the per_node.yaml template runs on all Compute nodes (as per the OS::TripleO::ComputeExtraConfigPre hook), but only the Compute node with system UUID F5055C6C-477F-47FB-AFE5-95C6928C407F receives hieradata. This provides a method of tailoring each node to specific requirements. For more information about NodeDataLookup, see Configuring Ceph Storage Cluster Setting in the Deploying an Overcloud with Containerized Red Hat Ceph guide. 4.7. Puppet: Applying Custom Manifests In certain circumstances, you might need to install and configure some additional components on your Overcloud nodes. You can achieve this with a custom Puppet manifest that applies to nodes after the main configuration completes. As a basic example, you might intend to install motd on each node. The process for accomplishing this is to first create a Heat template ( /home/stack/templates/custom_puppet_config.yaml ) that launches Puppet configuration. This includes the /home/stack/templates/motd.pp within the template and passes it to nodes for configuration. The motd.pp file itself contains the Puppet classes to install and configure motd . Create an environment file ( /home/stack/templates/puppet_post_config.yaml ) that registers your heat template as the OS::TripleO::NodeExtraConfigPost resource type. Include this environment file along with your other environment files when creating or updating the Overcloud stack: This applies the configuration from motd.pp to all nodes in the Overcloud.
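The contents of motd.pp are not reproduced in this chapter. The following is only an illustrative sketch of what such a manifest might contain; the package name and message text are assumptions, not the guide's actual manifest:
# /home/stack/templates/motd.pp (illustrative sketch only)
package { 'motd':
  ensure => installed,
}
file { '/etc/motd':
  ensure  => file,
  mode    => '0644',
  content => "This node is managed by the overcloud post-deployment hook.\n",
  require => Package['motd'],
}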
[ "heat_template_version: 2014-10-16 description: > Extra hostname configuration resources: userdata: type: OS::Heat::MultipartMime properties: parts: - config: {get_resource: nameserver_config} nameserver_config: type: OS::Heat::SoftwareConfig properties: config: | #!/bin/bash echo \"nameserver 192.168.1.1\" >> /etc/resolv.conf outputs: OS::stack_id: value: {get_resource: userdata}", "resource_registry: OS::TripleO::NodeUserData: /home/stack/templates/nameserver.yaml", "openstack overcloud deploy --templates -e /home/stack/templates/firstboot.yaml", "heat_template_version: 2014-10-16 description: > Extra hostname configuration parameters: server: type: json nameserver_ip: type: string DeployIdentifier: type: string resources: CustomExtraConfigPre: type: OS::Heat::SoftwareConfig properties: group: script config: str_replace: template: | #!/bin/sh echo \"nameserver _NAMESERVER_IP_\" > /etc/resolv.conf params: _NAMESERVER_IP_: {get_param: nameserver_ip} CustomExtraDeploymentPre: type: OS::Heat::SoftwareDeployment properties: server: {get_param: server} config: {get_resource: CustomExtraConfigPre} actions: ['CREATE','UPDATE'] input_values: deploy_identifier: {get_param: DeployIdentifier} outputs: deploy_stdout: description: Deployment reference, used to trigger pre-deploy on changes value: {get_attr: [CustomExtraDeploymentPre, deploy_stdout]}", "resource_registry: OS::TripleO::ControllerExtraConfigPre: /home/stack/templates/nameserver.yaml parameter_defaults: nameserver_ip: 192.168.1.1", "openstack overcloud deploy --templates -e /home/stack/templates/pre_config.yaml", "heat_template_version: 2014-10-16 description: > Extra hostname configuration parameters: server: type: string nameserver_ip: type: string DeployIdentifier: type: string resources: CustomExtraConfigPre: type: OS::Heat::SoftwareConfig properties: group: script config: str_replace: template: | #!/bin/sh echo \"nameserver _NAMESERVER_IP_\" >> /etc/resolv.conf params: _NAMESERVER_IP_: {get_param: nameserver_ip} CustomExtraDeploymentPre: type: OS::Heat::SoftwareDeployment properties: server: {get_param: server} config: {get_resource: CustomExtraConfigPre} actions: ['CREATE','UPDATE'] input_values: deploy_identifier: {get_param: DeployIdentifier} outputs: deploy_stdout: description: Deployment reference, used to trigger pre-deploy on changes value: {get_attr: [CustomExtraDeploymentPre, deploy_stdout]}", "resource_registry: OS::TripleO::NodeExtraConfig: /home/stack/templates/nameserver.yaml parameter_defaults: nameserver_ip: 192.168.1.1", "openstack overcloud deploy --templates -e /home/stack/templates/pre_config.yaml", "heat_template_version: 2014-10-16 description: > Extra hostname configuration parameters: servers: type: json nameserver_ip: type: string DeployIdentifier: type: string EndpointMap: default: {} type: json resources: CustomExtraConfig: type: OS::Heat::SoftwareConfig properties: group: script config: str_replace: template: | #!/bin/sh echo \"nameserver _NAMESERVER_IP_\" >> /etc/resolv.conf params: _NAMESERVER_IP_: {get_param: nameserver_ip} CustomExtraDeployments: type: OS::Heat::SoftwareDeploymentGroup properties: servers: {get_param: servers} config: {get_resource: CustomExtraConfig} actions: ['CREATE','UPDATE'] input_values: deploy_identifier: {get_param: DeployIdentifier}", "resource_registry: OS::TripleO::NodeExtraConfigPost: /home/stack/templates/nameserver.yaml parameter_defaults: nameserver_ip: 192.168.1.1", "openstack overcloud deploy --templates -e /home/stack/templates/post_config.yaml", "parameter_defaults: 
ComputeExtraConfig: nova::compute::reserved_host_memory: 1024 nova::compute::vnc_keymap: ja", "openstack baremetal introspection data save 9dcc87ae-4c6d-4ede-81a5-9b20d7dc4a14 | jq .extra.system.product.uuid", "\"F5055C6C-477F-47FB-AFE5-95C6928C407F\"", "resource_registry: OS::TripleO::ComputeExtraConfigPre: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/pre_deploy/per_node.yaml parameter_defaults: NodeDataLookup: '{\"F5055C6C-477F-47FB-AFE5-95C6928C407F\": {\"nova::compute::vcpu_pin_set\": [ \"2\", \"3\" ]}}'", "heat_template_version: 2014-10-16 description: > Run Puppet extra configuration to set new MOTD parameters: servers: type: json resources: ExtraPuppetConfig: type: OS::Heat::SoftwareConfig properties: config: {get_file: motd.pp} group: puppet options: enable_hiera: True enable_facter: False ExtraPuppetDeployments: type: OS::Heat::SoftwareDeploymentGroup properties: config: {get_resource: ExtraPuppetConfig} servers: {get_param: servers}", "resource_registry: OS::TripleO::NodeExtraConfigPost: /home/stack/templates/custom_puppet_config.yaml", "openstack overcloud deploy --templates -e /home/stack/templates/puppet_post_config.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/advanced_overcloud_customization/chap-Configuration_Hooks
Post-installation configuration
Post-installation configuration OpenShift Container Platform 4.9 Day 2 operations for OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "oc get dnses.config.openshift.io/cluster -o yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: \"2019-10-25T18:27:09Z\" generation: 2 name: cluster resourceVersion: \"37966\" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>: owned publicZone: id: Z2XXXXXXXXXXA4 status: {}", "oc patch dnses.config.openshift.io/cluster --type=merge --patch='{\"spec\": {\"publicZone\": null}}' dns.config.openshift.io/cluster patched", "oc get dnses.config.openshift.io/cluster -o yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: \"2019-10-25T18:27:09Z\" generation: 2 name: cluster resourceVersion: \"37966\" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>-wfpg4: owned status: {}", "oc replace --force --wait --filename - <<EOF apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: default spec: endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal EOF", "ingresscontroller.operator.openshift.io \"default\" deleted ingresscontroller.operator.openshift.io/default replaced", "oc get machine -n openshift-machine-api", "NAME STATE TYPE REGION ZONE AGE lk4pj-master-0 running m4.xlarge us-east-1 us-east-1a 17m lk4pj-master-1 running m4.xlarge us-east-1 us-east-1b 17m lk4pj-master-2 running m4.xlarge us-east-1 us-east-1a 17m lk4pj-worker-us-east-1a-5fzfj running m4.xlarge us-east-1 us-east-1a 15m lk4pj-worker-us-east-1a-vbghs running m4.xlarge us-east-1 us-east-1a 15m lk4pj-worker-us-east-1b-zgpzg running m4.xlarge us-east-1 us-east-1b 15m", "oc edit machines -n openshift-machine-api <master_name> 1", "spec: providerSpec: value: loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network", "oc get machineconfigpool", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-06c9c4... True False False 3 3 3 0 4h42m worker rendered-worker-f4b64... False True False 3 2 2 0 4h42m", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-06c9c4... True False False 3 3 3 0 4h42m worker rendered-worker-c1b41a... False True False 3 2 3 0 4h42m", "oc describe mcp worker", "Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 2 Ready Machine Count: 3 Unavailable Machine Count: 0 Updated Machine Count: 3 Events: <none>", "Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 2 Ready Machine Count: 2 Unavailable Machine Count: 1 Updated Machine Count: 3", "oc get machineconfigs", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m 00-worker 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m 01-master-container-runtime 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m 01-master-kubelet 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m rendered-master-dde... 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m rendered-worker-fde... 2c9371fbb673b97a6fe8b1c52... 
3.2.0 5h18m", "oc describe machineconfigs 01-master-kubelet", "Name: 01-master-kubelet Spec: Config: Ignition: Version: 3.2.0 Storage: Files: Contents: Source: data:, Mode: 420 Overwrite: true Path: /etc/kubernetes/cloud.conf Contents: Source: data:,kind%3A%20KubeletConfiguration%0AapiVersion%3A%20kubelet.config.k8s.io%2Fv1beta1%0Aauthentication%3A%0A%20%20x509%3A%0A%20%20%20%20clientCAFile%3A%20%2Fetc%2Fkubernetes%2Fkubelet-ca.crt%0A%20%20anonymous Mode: 420 Overwrite: true Path: /etc/kubernetes/kubelet.conf Systemd: Units: Contents: [Unit] Description=Kubernetes Kubelet Wants=rpc-statd.service network-online.target crio.service After=network-online.target crio.service ExecStart=/usr/bin/hyperkube kubelet --config=/etc/kubernetes/kubelet.conf \\", "oc delete -f ./myconfig.yaml", "variant: openshift version: 4.9.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony", "butane 99-worker-chrony.bu -o 99-worker-chrony.yaml", "oc apply -f ./99-worker-chrony.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: <node_role> 1 name: disable-chronyd spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=NTP client/server Documentation=man:chronyd(8) man:chrony.conf(5) After=ntpdate.service sntp.service ntpd.service Conflicts=ntpd.service systemd-timesyncd.service ConditionCapability=CAP_SYS_TIME [Service] Type=forking PIDFile=/run/chrony/chronyd.pid EnvironmentFile=-/etc/sysconfig/chronyd ExecStart=/usr/sbin/chronyd USDOPTIONS ExecStartPost=/usr/libexec/chrony-helper update-daemon PrivateTmp=yes ProtectHome=yes ProtectSystem=full [Install] WantedBy=multi-user.target enabled: false name: \"chronyd.service\"", "oc create -f disable-chronyd.yaml", "oc get MachineConfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 05-worker-kernelarg-selinuxpermissive 2 spec: config: ignition: version: 3.2.0 kernelArguments: - enforcing=0 3", "oc create -f 05-worker-kernelarg-selinuxpermissive.yaml", "oc get MachineConfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 
52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 05-worker-kernelarg-selinuxpermissive 3.2.0 105s 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.22.1 ip-10-0-136-243.ec2.internal Ready master 34m v1.22.1 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.22.1 ip-10-0-142-249.ec2.internal Ready master 34m v1.22.1 ip-10-0-153-11.ec2.internal Ready worker 28m v1.22.1 ip-10-0-153-150.ec2.internal Ready master 34m v1.22.1", "oc debug node/ip-10-0-141-105.ec2.internal", "Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline BOOT_IMAGE=/ostree/rhcos-... console=tty0 console=ttyS0,115200n8 rootflags=defaults,prjquota rw root=UUID=fd0... ostree=/ostree/boot.0/rhcos/16 coreos.oem.id=qemu coreos.oem.id=ec2 ignition.platform.id=ec2 enforcing=0 sh-4.2# exit", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"master\" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'", "oc create -f ./99-worker-kargs-mpath.yaml", "oc get MachineConfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-kargs-mpath 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 105s 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.22.1 ip-10-0-136-243.ec2.internal Ready master 34m v1.22.1 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.22.1 ip-10-0-142-249.ec2.internal Ready master 34m v1.22.1 ip-10-0-153-11.ec2.internal Ready worker 28m v1.22.1 ip-10-0-153-150.ec2.internal Ready master 34m v1.22.1", "oc debug node/ip-10-0-141-105.ec2.internal", "Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline 
rd.multipath=default root=/dev/disk/by-label/dm-mpath-root sh-4.2# exit", "cat << EOF > 99-worker-realtime.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-realtime spec: kernelType: realtime EOF", "oc create -f 99-worker-realtime.yaml", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-143-147.us-east-2.compute.internal Ready worker 103m v1.22.1 ip-10-0-146-92.us-east-2.compute.internal Ready worker 101m v1.22.1 ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.22.1", "oc debug node/ip-10-0-143-147.us-east-2.compute.internal", "Starting pod/ip-10-0-143-147us-east-2computeinternal-debug To use host binaries, run `chroot /host` sh-4.4# uname -a Linux <worker_node> 4.18.0-147.3.1.rt24.96.el8_1.x86_64 #1 SMP PREEMPT RT Wed Nov 27 18:29:55 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux", "oc delete -f 99-worker-realtime.yaml", "variant: openshift version: 4.9.0 metadata: name: 40-worker-custom-journald labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/systemd/journald.conf mode: 0644 overwrite: true contents: inline: | # Disable rate limiting RateLimitInterval=1s RateLimitBurst=10000 Storage=volatile Compress=no MaxRetentionSec=30s", "butane 40-worker-custom-journald.bu -o 40-worker-custom-journald.yaml", "oc apply -f 40-worker-custom-journald.yaml", "oc get machineconfigpool NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-35 True False False 3 3 3 0 34m worker rendered-worker-d8 False True False 3 1 1 0 34m", "oc get node | grep worker ip-10-0-0-1.us-east-2.compute.internal Ready worker 39m v0.0.0-master+USDFormat:%hUSD oc debug node/ip-10-0-0-1.us-east-2.compute.internal Starting pod/ip-10-0-141-142us-east-2computeinternal-debug sh-4.2# chroot /host sh-4.4# cat /etc/systemd/journald.conf Disable rate limiting RateLimitInterval=1s RateLimitBurst=10000 Storage=volatile Compress=no MaxRetentionSec=30s sh-4.4# exit", "cat << EOF > 80-extensions.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 80-worker-extensions spec: config: ignition: version: 3.2.0 extensions: - usbguard EOF", "oc create -f 80-extensions.yaml", "oc get machineconfig 80-worker-extensions", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 80-worker-extensions 3.2.0 57s", "oc get machineconfigpool", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-35 True False False 3 3 3 0 34m worker rendered-worker-d8 False True False 3 1 1 0 34m", "oc get node | grep worker", "NAME STATUS ROLES AGE VERSION ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.22.1", "oc debug node/ip-10-0-169-2.us-east-2.compute.internal", "To use host binaries, run `chroot /host` sh-4.4# chroot /host sh-4.4# rpm -q usbguard usbguard-0.7.4-4.el8.x86_64.rpm", "variant: openshift version: 4.9.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-worker-firmware-blob storage: files: - path: /var/lib/firmware/<package_name> 1 contents: local: <package_name> 2 mode: 0644 3 openshift: kernel_arguments: - 'firmware_class.path=/var/lib/firmware' 4", "butane 98-worker-firmware-blob.bu -o 98-worker-firmware-blob.yaml --files-dir <directory_including_package_name>", "oc apply -f 98-worker-firmware-blob.yaml", "oc get 
kubeletconfig", "NAME AGE set-max-pods 15m", "oc get mc | grep kubelet", "99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m", "oc describe machineconfigpool <name>", "oc describe machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: set-max-pods 1", "oc label machineconfigpool worker custom-kubelet=set-max-pods", "oc get machineconfig", "oc describe node <node_name>", "oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94", "Allocatable: attachable-volumes-aws-ebs: 25 cpu: 3500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 15341844Ki pods: 250", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods 1 kubeletConfig: maxPods: 500 2", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods kubeletConfig: maxPods: <pod_count> kubeAPIBurst: <burst_rate> kubeAPIQPS: <QPS>", "oc label machineconfigpool worker custom-kubelet=large-pods", "oc create -f change-maxPods-cr.yaml", "oc get kubeletconfig", "NAME AGE set-max-pods 15m", "oc describe node <node_name>", "Allocatable: attachable-volumes-gce-pd: 127 cpu: 3500m ephemeral-storage: 123201474766 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 14225400Ki pods: 500 1", "oc get kubeletconfigs set-max-pods -o yaml", "spec: kubeletConfig: maxPods: 500 machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods status: conditions: - lastTransitionTime: \"2021-06-30T17:04:07Z\" message: Success status: \"True\" type: Success", "oc get ctrcfg", "NAME AGE ctr-pid 24m ctr-overlay 15m ctr-level 5m45s", "oc get mc | grep container", "01-master-container-runtime b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 57m 01-worker-container-runtime b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 57m 99-worker-generated-containerruntime b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m 99-worker-generated-containerruntime-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 17m 99-worker-generated-containerruntime-2 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 7m26s", "apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: overlay-size spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: '' 1 containerRuntimeConfig: pidsLimit: 2048 2 logLevel: debug 3 overlaySize: 8G 4 logSizeMax: \"-1\" 5", "apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: overlay-size spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: '' 1 containerRuntimeConfig: 2 pidsLimit: 2048 logLevel: debug overlaySize: 8G logSizeMax: \"-1\"", "oc create -f <file_name>.yaml", "oc get ContainerRuntimeConfig", "NAME AGE overlay-size 3m19s", "oc get machineconfigs | grep containerrun", "99-worker-generated-containerruntime 2c9371fbb673b97a6fe8b1c52691999ed3a1bfc2 3.2.0 31s", "oc get mcp worker", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-169 False True False 3 1 1 0 9h", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-4.4# crio config | egrep 'log_level|pids_limit|log_size_max'", "pids_limit = 2048 log_size_max = -1 log_level = 
\"debug\"", "sh-4.4# head -n 7 /etc/containers/storage.conf", "[storage] driver = \"overlay\" runroot = \"/var/run/containers/storage\" graphroot = \"/var/lib/containers/storage\" [storage.options] additionalimagestores = [] size = \"8G\"", "apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: overlay-size spec: machineConfigPoolSelector: matchLabels: custom-crio: overlay-size containerRuntimeConfig: pidsLimit: 2048 logLevel: debug overlaySize: 8G", "oc apply -f overlaysize.yml", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2020-07-09T15:46:34Z\" generation: 3 labels: custom-crio: overlay-size machineconfiguration.openshift.io/mco-built-in: \"\"", "oc get machineconfigs", "99-worker-generated-containerruntime 4173030d89fbf4a7a0976d1665491a4d9a6e54f1 3.2.0 7m42s rendered-worker-xyz 4173030d89fbf4a7a0976d1665491a4d9a6e54f1 3.2.0 7m36s", "oc get mcp worker", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-xyz False True False 3 2 2 0 20h", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-xyz True False False 3 3 3 0 20h", "head -n 7 /etc/containers/storage.conf [storage] driver = \"overlay\" runroot = \"/var/run/containers/storage\" graphroot = \"/var/lib/containers/storage\" [storage.options] additionalimagestores = [] size = \"8G\"", "~ USD df -h Filesystem Size Used Available Use% Mounted on overlay 8.0G 8.0K 8.0G 0% /", "oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' ><pull_secret_location> 1", "oc registry login --registry=\"<registry>\" \\ 1 --auth-basic=\"<username>:<password>\" \\ 2 --to=<pull_secret_location> 3", "oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1", "oc get machinesets -n openshift-machine-api", "oc get machine -n openshift-machine-api", "oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/cluster-api-delete-machine=\"true\"", "oc adm cordon <node_name> oc adm drain <node_name>", "oc scale --replicas=2 machineset <machineset> -n openshift-machine-api", "oc edit machineset <machineset> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2", "oc get machines", "spec: deletePolicy: <delete_policy> replicas: <desired_replica_count>", "oc edit scheduler cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: type=user-node,region=east 1 mastersSchedulable: false policy: name: \"\"", "oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api 1", "oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"", "oc edit MachineSet abc612-msrtw-worker-us-east-1c -n 
openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: metadata: spec: metadata: labels: region: east type: user-node", "oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api", "oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api", "oc get nodes -l <key>=<value>", "oc get nodes -l type=user-node", "NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.22.1", "oc label nodes <name> <key>=<value>", "oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 type=user-node region=east", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: \"user-node\" region: \"east\"", "oc get nodes -l <key>=<value>,<key>=<value>", "oc get nodes -l type=user-node,region=east", "NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.22.1", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc label node <node-name> node-role.kubernetes.io/app=\"\"", "oc label node <node-name> node-role.kubernetes.io/infra=\"\"", "oc get nodes", "oc edit scheduler cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: topology.kubernetes.io/region=us-east-1 1", "oc label node <node_name> <label>", "oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra=", "cat infra.mcp.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\" 2", "oc create -f infra.mcp.yaml", "oc get machineconfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 
01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-ssh 3.2.0 31d 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-worker-ssh 3.2.0 31d rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d", "cat infra.mc.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 51-infra labels: machineconfiguration.openshift.io/role: infra 1 spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/infratest mode: 0644 contents: source: data:,infra", "oc create -f infra.mc.yaml", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m", "oc describe nodes <node_name>", "describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Roles: worker Taints: node-role.kubernetes.io/infra:NoSchedule", "oc adm taint nodes <node_name> <key>=<value>:<effect>", "oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoExecute", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: spec: taints: - key: node-role.kubernetes.io/infra effect: NoExecute value: reserved", "tolerations: - effect: NoExecute 1 key: node-role.kubernetes.io/infra 2 operator: Exists 3 value: reserved 4", "oc get ingresscontroller default -n openshift-ingress-operator -o yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: \"11341\" selfLink: 
/apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: \"True\" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default", "oc edit ingresscontroller default -n openshift-ingress-operator", "spec: nodePlacement: nodeSelector: 1 matchLabels: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc get pod -n openshift-ingress -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none>", "oc get node <node_name> 1", "NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.22.1", "oc get configs.imageregistry.operator.openshift.io/cluster -o yaml", "apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: \"56174\" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status:", "oc edit configs.imageregistry.operator.openshift.io/cluster", "spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc get pods -o wide -n openshift-image-registry", "oc describe node <node_name>", "oc edit configmap cluster-monitoring-config -n openshift-monitoring", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute grafana: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute 
k8sPrometheusAdapter: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute telemeterClient: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute", "watch 'oc get pod -n openshift-monitoring -o wide'", "oc delete pod -n openshift-monitoring <pod>", "oc edit ClusterLogging instance", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: collection: logs: fluentd: resources: null type: fluentd logStore: elasticsearch: nodeCount: 3 nodeSelector: 1 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved redundancyPolicy: SingleRedundancy resources: limits: cpu: 500m memory: 16Gi requests: cpu: 500m memory: 16Gi storage: {} type: elasticsearch managementState: Managed visualization: kibana: nodeSelector: 2 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved proxy: resources: null replicas: 1 resources: null type: kibana", "oc get pod kibana-5b8bdf44f9-ccpq9 -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-5b8bdf44f9-ccpq9 2/2 Running 0 27s 10.129.2.18 ip-10-0-147-79.us-east-2.compute.internal <none> <none>", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.22.1 ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.22.1 ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.22.1 ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.22.1 ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.22.1 ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.22.1 ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.22.1", "oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml", "kind: Node apiVersion: v1 metadata: name: ip-10-0-139-48.us-east-2.compute.internal selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751 resourceVersion: '39083' creationTimestamp: '2020-04-13T19:07:55Z' labels: node-role.kubernetes.io/infra: ''", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: visualization: kibana: nodeSelector: 1 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana", "oc get pods", "NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 29m 
elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 28m fluentd-42dzz 1/1 Running 0 28m fluentd-d74rq 1/1 Running 0 28m fluentd-m5vr9 1/1 Running 0 28m fluentd-nkxl7 1/1 Running 0 28m fluentd-pdvqb 1/1 Running 0 28m fluentd-tflh6 1/1 Running 0 28m kibana-5b8bdf44f9-ccpq9 2/2 Terminating 0 4m11s kibana-7d85dcffc8-bfpfp 2/2 Running 0 33s", "oc get pod kibana-7d85dcffc8-bfpfp -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-7d85dcffc8-bfpfp 2/2 Running 0 43s 10.131.0.22 ip-10-0-139-48.us-east-2.compute.internal <none> <none>", "oc get pods", "NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 30m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 29m fluentd-42dzz 1/1 Running 0 29m fluentd-d74rq 1/1 Running 0 29m fluentd-m5vr9 1/1 Running 0 29m fluentd-nkxl7 1/1 Running 0 29m fluentd-pdvqb 1/1 Running 0 29m fluentd-tflh6 1/1 Running 0 29m kibana-7d85dcffc8-bfpfp 2/2 Running 0 62s", "apiVersion: \"autoscaling.openshift.io/v1\" kind: \"ClusterAutoscaler\" metadata: name: \"default\" spec: podPriorityThreshold: -10 1 resourceLimits: maxNodesTotal: 24 2 cores: min: 8 3 max: 128 4 memory: min: 4 5 max: 256 6 gpus: - type: nvidia.com/gpu 7 min: 0 8 max: 16 9 - type: amd.com/gpu min: 0 max: 4 scaleDown: 10 enabled: true 11 delayAfterAdd: 10m 12 delayAfterDelete: 5m 13 delayAfterFailure: 30s 14 unneededTime: 5m 15", "oc create -f <filename>.yaml 1", "apiVersion: \"autoscaling.openshift.io/v1beta1\" kind: \"MachineAutoscaler\" metadata: name: \"worker-us-east-1a\" 1 namespace: \"openshift-machine-api\" spec: minReplicas: 1 2 maxReplicas: 12 3 scaleTargetRef: 4 apiVersion: machine.openshift.io/v1beta1 kind: MachineSet 5 name: worker-us-east-1a 6", "oc create -f <filename>.yaml 1", "apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 . 
spec: featureSet: TechPreviewNoUpgrade 2", "sh-4.2# chroot /host", "sh-4.2# cat /etc/kubernetes/kubelet.conf", "featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false", "oc edit featuregate cluster", "apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: TechPreviewNoUpgrade 2", "sh-4.2# chroot /host", "sh-4.2# cat /etc/kubernetes/kubelet.conf", "featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false", "oc edit apiserver", "spec: encryption: type: aescbc 1", "oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "EncryptionCompleted All resources encrypted: routes.route.openshift.io", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "EncryptionCompleted All resources encrypted: secrets, configmaps", "oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "EncryptionCompleted All resources encrypted: oauthaccesstokens.oauth.openshift.io, oauthauthorizetokens.oauth.openshift.io", "oc edit apiserver", "spec: encryption: type: identity 1", "oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "DecryptionCompleted Encryption mode set to identity and everything is decrypted", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "DecryptionCompleted Encryption mode set to identity and everything is decrypted", "oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "DecryptionCompleted Encryption mode set to identity and everything is decrypted", "oc debug node/<node_name>", "sh-4.2# chroot /host", "sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup", "found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-6 found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-7 found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-6 found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-3 ede95fe6b88b87ba86a03c15e669fb4aa5bf0991c180d3c6895ce72eaade54a1 etcdctl version: 3.4.14 API version: 3.4 {\"level\":\"info\",\"ts\":1624647639.0188997,\"caller\":\"snapshot/v3_snapshot.go:119\",\"msg\":\"created temporary db file\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db.part\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:39.030Z\",\"caller\":\"clientv3/maintenance.go:200\",\"msg\":\"opened snapshot stream; downloading\"} {\"level\":\"info\",\"ts\":1624647639.0301006,\"caller\":\"snapshot/v3_snapshot.go:127\",\"msg\":\"fetching snapshot\",\"endpoint\":\"https://10.0.0.5:2379\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:40.215Z\",\"caller\":\"clientv3/maintenance.go:208\",\"msg\":\"completed snapshot read; closing\"} {\"level\":\"info\",\"ts\":1624647640.6032252,\"caller\":\"snapshot/v3_snapshot.go:142\",\"msg\":\"fetched snapshot\",\"endpoint\":\"https://10.0.0.5:2379\",\"size\":\"114 MB\",\"took\":1.584090459} 
{\"level\":\"info\",\"ts\":1624647640.6047094,\"caller\":\"snapshot/v3_snapshot.go:152\",\"msg\":\"saved\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db\"} Snapshot saved at /home/core/assets/backup/snapshot_2021-06-25_190035.db {\"hash\":3866667823,\"revision\":31407,\"totalKey\":12828,\"totalSize\":114446336} snapshot db and kube resources are successfully saved to /home/core/assets/backup", "I0907 08:43:12.171919 1 defragcontroller.go:198] etcd member \"ip- 10-0-191-150.example.redhat.com\" backend store fragmented: 39.33 %, dbSize: 349138944", "oc -n openshift-etcd get pods -l k8s-app=etcd -o wide", "etcd-ip-10-0-159-225.example.redhat.com 3/3 Running 0 175m 10.0.159.225 ip-10-0-159-225.example.redhat.com <none> <none> etcd-ip-10-0-191-37.example.redhat.com 3/3 Running 0 173m 10.0.191.37 ip-10-0-191-37.example.redhat.com <none> <none> etcd-ip-10-0-199-170.example.redhat.com 3/3 Running 0 176m 10.0.199.170 ip-10-0-199-170.example.redhat.com <none> <none>", "oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table", "Defaulting container name to etcdctl. Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod. +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.4.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+", "oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com", "sh-4.4# unset ETCDCTL_ENDPOINTS", "sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag", "Finished defragmenting etcd member[https://localhost:2379]", "sh-4.4# etcdctl endpoint status -w table --cluster", "+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.4.9 | 41 MB | false | false | 7 | 91624 | 91624 | | 1 | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.4.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+", "sh-4.4# etcdctl alarm list", "memberID:12345678912345678912 alarm:NOSPACE", "sh-4.4# etcdctl alarm disarm", "sudo mv /etc/kubernetes/manifests/etcd-pod.yaml /tmp", "sudo crictl ps | grep etcd | grep -v operator", "sudo mv 
/etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp", "sudo crictl ps | grep kube-apiserver | grep -v operator", "sudo mv /var/lib/etcd/ /tmp", "sudo -E /usr/local/bin/cluster-restore.sh /home/core/backup", "...stopping kube-scheduler-pod.yaml ...stopping kube-controller-manager-pod.yaml ...stopping etcd-pod.yaml ...stopping kube-apiserver-pod.yaml Waiting for container etcd to stop .complete Waiting for container etcdctl to stop .............................complete Waiting for container etcd-metrics to stop complete Waiting for container kube-controller-manager to stop complete Waiting for container kube-apiserver to stop ..........................................................................................complete Waiting for container kube-scheduler to stop complete Moving etcd data-dir /var/lib/etcd/member to /var/lib/etcd-backup starting restore-etcd static pod starting kube-apiserver-pod.yaml static-pod-resources/kube-apiserver-pod-7/kube-apiserver-pod.yaml starting kube-controller-manager-pod.yaml static-pod-resources/kube-controller-manager-pod-7/kube-controller-manager-pod.yaml starting kube-scheduler-pod.yaml static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml", "oc get nodes -w", "NAME STATUS ROLES AGE VERSION host-172-25-75-28 Ready master 3d20h v1.23.3+e419edf host-172-25-75-38 Ready infra,worker 3d20h v1.23.3+e419edf host-172-25-75-40 Ready master 3d20h v1.23.3+e419edf host-172-25-75-65 Ready master 3d20h v1.23.3+e419edf host-172-25-75-74 Ready infra,worker 3d20h v1.23.3+e419edf host-172-25-75-79 Ready worker 3d20h v1.23.3+e419edf host-172-25-75-86 Ready worker 3d20h v1.23.3+e419edf host-172-25-75-98 Ready infra,worker 3d20h v1.23.3+e419edf", "ssh -i <ssh-key-path> core@<master-hostname>", "sh-4.4# pwd /var/lib/kubelet/pki sh-4.4# ls kubelet-client-2022-04-28-11-24-09.pem kubelet-server-2022-04-28-11-24-15.pem kubelet-client-current.pem kubelet-server-current.pem", "sudo systemctl restart kubelet.service", "oc get csr", "NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2s94x 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 1 csr-4bd6t 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 2 csr-4hl85 13m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 3 csr-zhhhp 3m8s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 4", "oc describe csr <csr_name> 1", "oc adm certificate approve <csr_name>", "oc adm certificate approve <csr_name>", "sudo crictl ps | grep etcd | grep -v operator", "3ad41b7908e32 36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009 About a minute ago Running etcd 0 7c05f8af362f0", "oc -n openshift-etcd get pods -l k8s-app=etcd", "Unable to connect to the server: EOF", "NAME READY STATUS RESTARTS AGE etcd-ip-10-0-143-125.ec2.internal 1/1 Running 1 2m47s", "sudo rm -f /var/lib/ovn/etc/*.db", "oc delete pods -l app=ovnkube-master -n openshift-ovn-kubernetes", "oc get pods -l app=ovnkube-master -n openshift-ovn-kubernetes", "NAME READY STATUS RESTARTS AGE ovnkube-master-nb24h 4/4 Running 0 48s ovnkube-master-rm8kw 4/4 Running 0 47s ovnkube-master-zbqnh 4/4 Running 0 56s", "oc get pods -n openshift-ovn-kubernetes -o name | grep ovnkube-node | while read p ; do oc delete USDp -n openshift-ovn-kubernetes ; done", "oc get pods -n openshift-ovn-kubernetes | grep ovnkube-node", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE 
NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running", "oc get machine clustername-8qw5l-master-0 \\ 1 -n openshift-machine-api -o yaml > new-master-machine.yaml", "status: addresses: - address: 10.0.131.183 type: InternalIP - address: ip-10-0-131-183.ec2.internal type: InternalDNS - address: ip-10-0-131-183.ec2.internal type: Hostname lastUpdated: \"2020-04-20T17:44:29Z\" nodeRef: kind: Node name: ip-10-0-131-183.ec2.internal uid: acca4411-af0d-4387-b73e-52b2484295ad phase: Running providerStatus: apiVersion: awsproviderconfig.openshift.io/v1beta1 conditions: - lastProbeTime: \"2020-04-20T16:53:50Z\" lastTransitionTime: \"2020-04-20T16:53:50Z\" message: machine successfully created reason: MachineCreationSucceeded status: \"True\" type: MachineCreation instanceId: i-0fdb85790d76d0c3f instanceState: stopped kind: AWSMachineProviderStatus", "apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: name: clustername-8qw5l-master-3", "providerID: aws:///us-east-1a/i-0fdb85790d76d0c3f", "annotations: machine.openshift.io/instance-state: running generation: 2", "resourceVersion: \"13291\" uid: a282eb70-40a2-4e89-8009-d05dd420d31a", "oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running", "oc apply -f new-master-machine.yaml", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-173-171.ec2.internal 
aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running", "oc login -u <cluster_admin> 1", "oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1", "oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 7 1", "oc patch kubeapiserver cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 7 1", "oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge", "oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 7 1", "oc patch kubescheduler cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge", "oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 7 1", "oc -n openshift-etcd get pods -l k8s-app=etcd", "etcd-ip-10-0-143-125.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-154-194.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-173-171.ec2.internal 2/2 Running 0 9h", "oc get poddisruptionbudget --all-namespaces", "NAMESPACE NAME MIN-AVAILABLE SELECTOR another-project another-pdb 4 bar=foo test-project my-pdb 2 foo=bar", "apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 2 selector: 3 matchLabels: foo: bar", "apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: maxUnavailable: 25% 2 selector: 3 matchLabels: foo: bar", "oc create -f </path/to/file> -n <project_name>", "oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date )\"'\"}}' --type=merge", "oc get co kube-controller-manager", "oc -n openshift-cloud-credential-operator get CredentialsRequest -o json | jq -r '.items[] | select (.spec.providerSpec.kind==\"<provider_spec>\") | .spec.secretRef'", "{ \"name\": \"ebs-cloud-credentials\", \"namespace\": \"openshift-cluster-csi-drivers\" } { \"name\": \"cloud-credential-operator-iam-ro-creds\", \"namespace\": \"openshift-cloud-credential-operator\" }", "oc delete secret <secret_name> \\ 1 -n <secret_namespace> 2", "oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers", "oc get is <imagestream> -n openshift -o json | jq .spec.tags[].from.name | grep registry.redhat.io", "oc image mirror registry.redhat.io/rhscl/ruby-25-rhel7:latest USD{MIRROR_ADDR}/rhscl/ruby-25-rhel7:latest", "oc create 
configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config", "oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-config\"}}}' --type=merge", "oc edit configs.samples.operator.openshift.io -n openshift-cluster-samples-operator", "oc create configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config", "oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-config\"}}}' --type=merge", "oc import-image is/must-gather -n openshift", "oc adm must-gather --image=USD(oc adm release info --image-for must-gather)", "get imagestreams -nopenshift", "oc get is <image-stream-name> -o jsonpath=\"{range .spec.tags[*]}{.name}{'\\t'}{.from.name}{'\\n'}{end}\" -nopenshift", "oc get is ubi8-openjdk-17 -o jsonpath=\"{range .spec.tags[*]}{.name}{'\\t'}{.from.name}{'\\n'}{end}\" -nopenshift", "1.11 registry.access.redhat.com/ubi8/openjdk-17:1.11 1.12 registry.access.redhat.com/ubi8/openjdk-17:1.12", "oc tag <repository/image> <image-stream-name:tag> --scheduled -nopenshift", "oc tag registry.access.redhat.com/ubi8/openjdk-17:1.11 ubi8-openjdk-17:1.11 --scheduled -nopenshift oc tag registry.access.redhat.com/ubi8/openjdk-17:1.12 ubi8-openjdk-17:1.12 --scheduled -nopenshift", "get imagestream <image-stream-name> -o jsonpath=\"{range .spec.tags[*]}Tag: {.name}{'\\t'}Scheduled: {.importPolicy.scheduled}{'\\n'}{end}\" -nopenshift", "get imagestream ubi8-openjdk-17 -o jsonpath=\"{range .spec.tags[*]}Tag: {.name}{'\\t'}Scheduled: {.importPolicy.scheduled}{'\\n'}{end}\" -nopenshift", "Tag: 1.11 Scheduled: true Tag: 1.12 Scheduled: true", "subscription-manager register --username=<user_name> --password=<password>", "subscription-manager refresh", "subscription-manager list --available --matches '*OpenShift*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --enable=\"rhel-7-server-rpms\" --enable=\"rhel-7-server-extras-rpms\" --enable=\"rhel-7-server-ansible-2.9-rpms\" --enable=\"rhel-7-server-ose-4.9-rpms\"", "yum install openshift-ansible openshift-clients jq", "subscription-manager register --username=<user_name> --password=<password>", "subscription-manager refresh", "subscription-manager list --available --matches '*OpenShift*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --disable=\"*\"", "yum repolist", "yum-config-manager --disable <repo_id>", "yum-config-manager --disable \\*", "subscription-manager repos --enable=\"rhel-7-server-rpms\" --enable=\"rhel-7-fast-datapath-rpms\" --enable=\"rhel-7-server-extras-rpms\" --enable=\"rhel-7-server-optional-rpms\" --enable=\"rhel-7-server-ose-4.9-rpms\"", "subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.9-for-rhel-8-x86_64-rpms\" --enable=\"fast-datapath-for-rhel-8-x86_64-rpms\"", "systemctl disable --now firewalld.service", "[all:vars] ansible_user=root 1 #ansible_become=True 2 openshift_kubeconfig_path=\"~/.kube/config\" 3 [new_workers] 4 mycluster-rhel8-0.example.com mycluster-rhel8-1.example.com", "cd /usr/share/ansible/openshift-ansible", "ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1", "oc get nodes -o wide", "oc adm cordon <node_name> 1", "oc adm drain <node_name> --force --delete-emptydir-data --ignore-daemonsets 1", "oc delete nodes <node_name> 1", "oc get nodes -o wide", "sudo coreos-installer 
install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 2", "kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 1 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 2", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: \"Ready\" timeout: \"300s\" 5 status: \"False\" - type: \"Ready\" timeout: \"300s\" 6 status: \"Unknown\" maxUnhealthy: \"40%\" 7 nodeStartupTimeout: \"10m\" 8", "oc apply -f healthcheck.yml", "oc get machinesets -n openshift-machine-api", "oc get machine -n openshift-machine-api", "oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/cluster-api-delete-machine=\"true\"", "oc adm cordon <node_name> oc adm drain <node_name>", "oc scale --replicas=2 machineset <machineset> -n openshift-machine-api", "oc edit machineset <machineset> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2", "oc get machines", "kubeletConfig: podsPerCore: 
10", "kubeletConfig: maxPods: 250", "oc get kubeletconfig", "NAME AGE set-max-pods 15m", "oc get mc | grep kubelet", "99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m", "oc describe machineconfigpool <name>", "oc describe machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: set-max-pods 1", "oc label machineconfigpool worker custom-kubelet=set-max-pods", "oc get machineconfig", "oc describe node <node_name>", "oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94", "Allocatable: attachable-volumes-aws-ebs: 25 cpu: 3500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 15341844Ki pods: 250", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods 1 kubeletConfig: maxPods: 500 2", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods kubeletConfig: maxPods: <pod_count> kubeAPIBurst: <burst_rate> kubeAPIQPS: <QPS>", "oc label machineconfigpool worker custom-kubelet=large-pods", "oc create -f change-maxPods-cr.yaml", "oc get kubeletconfig", "NAME AGE set-max-pods 15m", "oc describe node <node_name>", "Allocatable: attachable-volumes-gce-pd: 127 cpu: 3500m ephemeral-storage: 123201474766 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 14225400Ki pods: 500 1", "oc get kubeletconfigs set-max-pods -o yaml", "spec: kubeletConfig: maxPods: 500 machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods status: conditions: - lastTransitionTime: \"2021-06-30T17:04:07Z\" message: Success status: \"True\" type: Success", "oc edit machineconfigpool worker", "spec: maxUnavailable: <node_count>", "oc label node perf-node.example.com cpumanager=true", "oc edit machineconfigpool worker", "metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2", "oc create -f cpumanager-kubeletconfig.yaml", "oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7", "\"ownerReferences\": [ { \"apiVersion\": \"machineconfiguration.openshift.io/v1\", \"kind\": \"KubeletConfig\", \"name\": \"cpumanager-enabled\", \"uid\": \"7ed5616d-6b72-11e9-aae1-021e1ce18878\" } ]", "oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager", "cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2", "cat cpumanager-pod.yaml", "apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: containers: - name: cpumanager image: gcr.io/google_containers/pause-amd64:3.0 resources: requests: cpu: 1 memory: \"1G\" limits: cpu: 1 memory: \"1G\" nodeSelector: cpumanager: \"true\"", "oc create -f cpumanager-pod.yaml", "oc describe pod cpumanager", "Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G QoS Class: Guaranteed Node-Selectors: cpumanager=true", "├─init.scope │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 
└─kubepods.slice ├─kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice │ ├─crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope │ └─32706 /pause", "cd /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope for i in `ls cpuset.cpus tasks` ; do echo -n \"$i \"; cat $i ; done", "cpuset.cpus 1 tasks 32706", "grep ^Cpus_allowed_list /proc/32706/status", "Cpus_allowed_list: 1", "cat /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus 0 oc describe node perf-node.example.com", "Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%)", "NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s", "apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- spec: containers: - securityContext: privileged: true image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: hugepages-2Mi: 100Mi 1 memory: \"1Gi\" cpu: \"1\" volumes: - name: hugepage emptyDir: medium: HugePages", "oc label node <node_using_hugepages> node-role.kubernetes.io/worker-hp=", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: hugepages 1 namespace: openshift-cluster-node-tuning-operator spec: profile: 2 - data: | [main] summary=Boot time configuration for hugepages include=openshift-node [bootloader] cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 3 name: openshift-node-hugepages recommend: - machineConfigLabels: 4 machineconfiguration.openshift.io/role: \"worker-hp\" priority: 30 profile: openshift-node-hugepages", "oc create -f hugepages-tuned-boottime.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-hp labels: worker-hp: \"\" spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-hp]} nodeSelector: matchLabels: node-role.kubernetes.io/worker-hp: \"\"", "oc create -f hugepages-mcp.yaml", "oc get node <node_using_hugepages> -o jsonpath=\"{.status.allocatable.hugepages-2Mi}\" 100Mi", "service DevicePlugin { // GetDevicePluginOptions returns options to be communicated with Device // Manager rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {} // ListAndWatch returns a stream of List of Devices // Whenever a Device state change or a Device disappears, ListAndWatch // returns the new list rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} // Allocate is called during container creation so that the Device // Plug-in can run device specific operations and instruct Kubelet // of the steps to make the Device available in the container rpc Allocate(AllocateRequest) returns (AllocateResponse) {} // PreStartcontainer is called, if
indicated by Device Plug-in during // registration phase, before each container start. Device plug-in // can run device specific operations such as reseting the device // before making devices available to the container rpc PreStartcontainer(PreStartcontainerRequest) returns (PreStartcontainerResponse) {} }", "oc describe machineconfig <name>", "oc describe machineconfig 00-worker", "Name: 00-worker Namespace: Labels: machineconfiguration.openshift.io/role=worker 1", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: devicemgr 1 spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io: devicemgr 2 kubeletConfig: feature-gates: - DevicePlugins=true 3", "oc create -f devicemgr.yaml", "kubeletconfig.machineconfiguration.openshift.io/devicemgr created", "spec: taints: - effect: NoExecute key: key1 value: value1 .", "spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600 .", "apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master", "spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600", "oc adm taint nodes node1 key1=value1:NoSchedule", "oc adm taint nodes node1 key1=value1:NoExecute", "oc adm taint nodes node1 key2=value2:NoSchedule", "spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\" - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\"", "spec: tolerations: - key: node.kubernetes.io/not-ready operator: Exists effect: NoExecute tolerationSeconds: 300 1 - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 300", "spec: tolerations: - operator: \"Exists\"", "spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2", "spec: tolerations: - key: \"key1\" operator: \"Exists\" 1 effect: \"NoExecute\" tolerationSeconds: 3600", "oc adm taint nodes <node_name> <key>=<value>:<effect>", "oc adm taint nodes node1 key1=value1:NoExecute", "apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master", "spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2", "spec: tolerations: - key: \"key1\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600", "oc edit machineset <machineset>", "spec: . template: . 
spec: taints: - effect: NoExecute key: key1 value: value1 .", "oc scale --replicas=0 machineset <machineset> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 0", "oc scale --replicas=2 machineset <machineset> -n openshift-machine-api", "oc edit machineset <machineset> -n openshift-machine-api", "oc adm taint nodes node1 dedicated=groupName:NoSchedule", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: spec: taints: - key: dedicated value: groupName effect: NoSchedule", "spec: tolerations: - key: \"disktype\" value: \"ssd\" operator: \"Equal\" effect: \"NoSchedule\" tolerationSeconds: 3600", "oc adm taint nodes <node-name> disktype=ssd:NoSchedule", "oc adm taint nodes <node-name> disktype=ssd:PreferNoSchedule", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: spec: taints: - key: disktype value: ssd effect: PreferNoSchedule", "oc adm taint nodes <node-name> <key>-", "oc adm taint nodes ip-10-0-132-248.ec2.internal key1-", "node/ip-10-0-132-248.ec2.internal untainted", "spec: tolerations: - key: \"key2\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600", "oc edit KubeletConfig cpumanager-enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2", "spec: containers: - name: nginx image: nginx", "spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" requests: memory: \"100Mi\"", "spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\" requests: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\"", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4", "apiVersion: v1 kind: Namespace metadata: . labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\" .", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: . 
mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1beta1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3 .", "apiVersion: v1 kind: Namespace metadata: name: clusterresourceoverride-operator", "oc create -f <file-name>.yaml", "oc create -f cro-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: clusterresourceoverride-operator namespace: clusterresourceoverride-operator spec: targetNamespaces: - clusterresourceoverride-operator", "oc create -f <file-name>.yaml", "oc create -f cro-og.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: channel: \"4.9\" name: clusterresourceoverride source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f <file-name>.yaml", "oc create -f cro-sub.yaml", "oc project clusterresourceoverride-operator", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4", "oc create -f <file-name>.yaml", "oc create -f cro-cr.yaml", "oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: . 
mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1beta1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3 .", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 1 cpuRequestToLimitPercent: 25 2 limitCPUToMemoryPercent: 200 3", "apiVersion: v1 kind: Namespace metadata: labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\" 1", "sysctl -a |grep commit", "vm.overcommit_memory = 1", "sysctl -a |grep panic", "vm.panic_on_oom = 0", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: cpuCfsQuota: 3 - \"true\"", "oc create -f <file_name>.yaml", "sysctl -w vm.overcommit_memory=0", "quota.openshift.io/cluster-resource-override-enabled: \"false\"", "oc create -f <file-name>.yaml", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-kubeconfig 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: evictionSoft: 3 memory.available: \"500Mi\" 4 nodefs.available: \"10%\" nodefs.inodesFree: \"5%\" imagefs.available: \"15%\" imagefs.inodesFree: \"10%\" evictionSoftGracePeriod: 5 memory.available: \"1m30s\" nodefs.available: \"1m30s\" nodefs.inodesFree: \"1m30s\" imagefs.available: \"1m30s\" imagefs.inodesFree: \"1m30s\" evictionHard: 6 memory.available: \"200Mi\" nodefs.available: \"5%\" nodefs.inodesFree: \"4%\" imagefs.available: \"10%\" imagefs.inodesFree: \"5%\" evictionPressureTransitionPeriod: 0s 7 imageMinimumGCAge: 5m 8 imageGCHighThresholdPercent: 80 9 imageGCLowThresholdPercent: 75 10", "oc create -f <file_name>.yaml", "oc create -f gc-container.yaml", "kubeletconfig.machineconfiguration.openshift.io/gc-container created", "oc get machineconfigpool", "NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True", "oc get Tuned/default -o yaml -n openshift-cluster-node-tuning-operator", "profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... 
other sysctl's or other TuneD daemon plugins supported by the containerized TuneD - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings", "recommend: <recommend-item-1> <recommend-item-n>", "- machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8", "- label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4", "- match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"worker-custom\" priority: 20 profile: openshift-node-custom", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: recommend: - profile: \"openshift-control-plane\" priority: 30 match: - label: \"node-role.kubernetes.io/master\" - label: \"node-role.kubernetes.io/infra\" - profile: \"openshift-node\" priority: 40", "oc exec $tuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \\;", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: podsPerCore: 10 3 maxPods: 250 4", "oc create -f <file_name>.yaml", "oc get machineconfigpools", "NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False False False worker worker-8cecd1236b33ee3f8a5e False True False", "oc get machineconfigpools", "NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False True False worker worker-8cecd1236b33ee3f8a5e True False False", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: \"\" status:", "apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4", "oc create -f user-ca-bundle.yaml", "oc edit proxy/cluster", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5", "oc get dnses.config.openshift.io/cluster -o yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: \"2019-10-25T18:27:09Z\" generation: 2 name: cluster resourceVersion: \"37966\" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>: owned publicZone: id: Z2XXXXXXXXXXA4 status: {}", "oc patch dnses.config.openshift.io/cluster --type=merge --patch='{\"spec\": {\"publicZone\": null}}' dns.config.openshift.io/cluster patched", "oc get dnses.config.openshift.io/cluster -o yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: \"2019-10-25T18:27:09Z\" generation: 2 name: cluster resourceVersion: \"37966\" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>-wfpg4: owned status: {}", "oc patch network.config.openshift.io cluster --type=merge -p '{ \"spec\": { \"serviceNodePortRange\": \"30000-<port>\" } }'", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: serviceNodePortRange: \"30000-<port>\"", "network.config.openshift.io/cluster patched", "oc get configmaps -n openshift-kube-apiserver config -o jsonpath=\"{.data['config\\.yaml']}\" | grep -Eo '\"service-node-port-range\":[\"[[:digit:]]+-[[:digit:]]+\"]'", "\"service-node-port-range\":[\"30000-33000\"]", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} ingress: []", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {}", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-http-and-https spec: podSelector: matchLabels: role: frontend ingress: - ports: - protocol: TCP port: 80 - protocol: TCP port: 443", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-pod-and-namespace-both spec: podSelector: matchLabels: name: test-pods ingress: - from: - namespaceSelector: matchLabels: project: project_name podSelector: matchLabels: name: test-pods", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017", "touch <policy_name>.yaml", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: ingress: []", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {}", "oc apply -f <policy_name>.yaml -n <namespace>", "networkpolicy.networking.k8s.io/default-deny created", "cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: \"\" podSelector: {} policyTypes: - Ingress EOF", "cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring spec: 
ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: monitoring podSelector: {} policyTypes: - Ingress EOF", "cat << EOF| oc create -f - kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} EOF", "oc describe networkpolicy", "Name: allow-from-openshift-ingress Namespace: example1 Created on: 2020-06-09 00:28:17 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: ingress Not affecting egress traffic Policy Types: Ingress Name: allow-from-openshift-monitoring Namespace: example1 Created on: 2020-06-09 00:29:57 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: monitoring Not affecting egress traffic Policy Types: Ingress", "oc adm create-bootstrap-project-template -o yaml > template.yaml", "oc create -f template.yaml -n openshift-config", "oc edit project.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: <template_name>", "oc edit template <project_template> -n openshift-config", "objects: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress", "oc new-project <project> 1", "oc get networkpolicy NAME POD-SELECTOR AGE allow-from-openshift-ingress <none> 7s allow-from-same-namespace <none> 7s", "openstack port show <cluster_name>-<cluster_ID>-ingress-port", "openstack floating ip set --port <ingress_port_ID> <apps_FIP>", "*.apps.<cluster_name>.<base_domain> IN A <apps_FIP>", "<apps_FIP> console-openshift-console.apps.<cluster name>.<base domain> <apps_FIP> integrated-oauth-server-openshift-authentication.apps.<cluster name>.<base domain> <apps_FIP> oauth-openshift.apps.<cluster name>.<base domain> <apps_FIP> prometheus-k8s-openshift-monitoring.apps.<cluster name>.<base domain> <apps_FIP> grafana-openshift-monitoring.apps.<cluster name>.<base domain> <apps_FIP> <app name>.apps.<cluster name>.<base domain>", "oc edit networks.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4", "kind: StorageClass 1 apiVersion: storage.k8s.io/v1 2 metadata: name: <storage-class-name> 3 annotations: 4 storageclass.kubernetes.io/is-default-class: 'true' provisioner: kubernetes.io/aws-ebs 5 parameters: 6 type: gp2", "storageclass.kubernetes.io/is-default-class: \"true\"", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: \"true\"", "kubernetes.io/description: My Storage Class Description", "apiVersion: storage.k8s.io/v1 
kind: StorageClass metadata: annotations: kubernetes.io/description: My Storage Class Description", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/cinder parameters: type: fast 2 availability: nova 3 fsType: ext4 4", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/aws-ebs parameters: type: io1 2 iopsPerGB: \"10\" 3 encrypted: \"true\" 4 kmsKeyId: keyvalue 5 fsType: ext4 6", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/azure-disk volumeBindingMode: WaitForFirstConsumer 2 allowVolumeExpansion: true parameters: kind: Managed 3 storageaccounttype: Premium_LRS 4 reclaimPolicy: Delete", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: system:azure-cloud-provider name: <persistent-volume-binder-role> 1 rules: - apiGroups: [''] resources: ['secrets'] verbs: ['get','create']", "oc adm policy add-cluster-role-to-user <persistent-volume-binder-role> system:serviceaccount:kube-system:persistent-volume-binder", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <azure-file> 1 provisioner: kubernetes.io/azure-file parameters: location: eastus 2 skuName: Standard_LRS 3 storageAccount: <storage-account> 4 reclaimPolicy: Delete volumeBindingMode: Immediate", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: azure-file mountOptions: - uid=1500 1 - gid=1500 2 - mfsymlinks 3 provisioner: kubernetes.io/azure-file parameters: location: eastus skuName: Standard_LRS reclaimPolicy: Delete volumeBindingMode: Immediate", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/gce-pd parameters: type: pd-standard 2 replication-type: none volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true reclaimPolicy: Delete", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/vsphere-volume 2 parameters: diskformat: thin 3", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage_class_name> 1 annotations: storageclass.kubernetes.io/is-default-class: \"<boolean>\" 2 provisioner: csi.ovirt.org allowVolumeExpansion: <boolean> 3 reclaimPolicy: Delete 4 volumeBindingMode: Immediate 5 parameters: storageDomainName: <rhv-storage-domain-name> 6 thinProvisioning: \"<boolean>\" 7 csi.storage.k8s.io/fstype: <file_system_type> 8", "oc get storageclass", "NAME TYPE gp2 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs", "oc patch storageclass gp2 -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'", "oc patch storageclass standard -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'", "oc get storageclass", "NAME TYPE gp2 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: my_identity_provider 1 mappingMethod: claim 2 type: HTPasswd htpasswd: fileData: name: htpass-secret 3", "oc describe clusterrole.rbac", "Name: admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- .packages.apps.redhat.com [] [] [* create update patch delete 
get list watch] imagestreams [] [] [create delete deletecollection get list patch update watch create get list watch] imagestreams.image.openshift.io [] [] [create delete deletecollection get list patch update watch create get list watch] secrets [] [] [create delete deletecollection get list patch update watch get list watch create delete deletecollection patch update] buildconfigs/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates [] [] [create delete deletecollection get list patch update watch get list watch] routes [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances [] [] [create delete deletecollection get list patch update watch get list watch] templates [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] routes.route.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] serviceaccounts [] [] [create delete deletecollection get list patch update watch impersonate create delete deletecollection patch update get list watch] imagestreams/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings [] [] [create delete deletecollection get list patch update watch] roles [] [] [create delete deletecollection get 
list patch update watch] rolebindings.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] roles.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] imagestreams.image.openshift.io/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] roles.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] networkpolicies.extensions [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] networkpolicies.networking.k8s.io [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] configmaps [] [] [create delete deletecollection patch update get list watch] endpoints [] [] [create delete deletecollection patch update get list watch] persistentvolumeclaims [] [] [create delete deletecollection patch update get list watch] pods [] [] [create delete deletecollection patch update get list watch] replicationcontrollers/scale [] [] [create delete deletecollection patch update get list watch] replicationcontrollers [] [] [create delete deletecollection patch update get list watch] services [] [] [create delete deletecollection patch update get list watch] daemonsets.apps [] [] [create delete deletecollection patch update get list watch] deployments.apps/scale [] [] [create delete deletecollection patch update get list watch] deployments.apps [] [] [create delete deletecollection patch update get list watch] replicasets.apps/scale [] [] [create delete deletecollection patch update get list watch] replicasets.apps [] [] [create delete deletecollection patch update get list watch] statefulsets.apps/scale [] [] [create delete deletecollection patch update get list watch] statefulsets.apps [] [] [create delete deletecollection patch update get list watch] horizontalpodautoscalers.autoscaling [] [] [create delete deletecollection patch update get list watch] cronjobs.batch [] [] [create delete deletecollection patch update get list watch] jobs.batch [] [] [create delete deletecollection patch update get list watch] daemonsets.extensions [] [] [create delete deletecollection patch update get list watch] deployments.extensions/scale [] [] [create delete deletecollection patch update get list watch] deployments.extensions [] [] [create delete deletecollection patch update get list watch] ingresses.extensions [] [] [create delete deletecollection patch update get list watch] replicasets.extensions/scale [] [] [create delete deletecollection patch update get list watch] replicasets.extensions [] [] [create delete deletecollection patch update get list watch] replicationcontrollers.extensions/scale [] [] [create delete deletecollection patch update get list watch] poddisruptionbudgets.policy [] [] [create delete deletecollection patch update get list watch] deployments.apps/rollback [] [] [create delete deletecollection patch update] deployments.extensions/rollback [] [] [create delete deletecollection patch update] catalogsources.operators.coreos.com [] [] [create update patch delete get list watch] clusterserviceversions.operators.coreos.com [] [] [create update patch delete get list watch] installplans.operators.coreos.com [] [] [create update patch delete get list watch] packagemanifests.operators.coreos.com [] [] [create update patch 
delete get list watch] subscriptions.operators.coreos.com [] [] [create update patch delete get list watch] buildconfigs/instantiate [] [] [create] buildconfigs/instantiatebinary [] [] [create] builds/clone [] [] [create] deploymentconfigrollbacks [] [] [create] deploymentconfigs/instantiate [] [] [create] deploymentconfigs/rollback [] [] [create] imagestreamimports [] [] [create] localresourceaccessreviews [] [] [create] localsubjectaccessreviews [] [] [create] podsecuritypolicyreviews [] [] [create] podsecuritypolicyselfsubjectreviews [] [] [create] podsecuritypolicysubjectreviews [] [] [create] resourceaccessreviews [] [] [create] routes/custom-host [] [] [create] subjectaccessreviews [] [] [create] subjectrulesreviews [] [] [create] deploymentconfigrollbacks.apps.openshift.io [] [] [create] deploymentconfigs.apps.openshift.io/instantiate [] [] [create] deploymentconfigs.apps.openshift.io/rollback [] [] [create] localsubjectaccessreviews.authorization.k8s.io [] [] [create] localresourceaccessreviews.authorization.openshift.io [] [] [create] localsubjectaccessreviews.authorization.openshift.io [] [] [create] resourceaccessreviews.authorization.openshift.io [] [] [create] subjectaccessreviews.authorization.openshift.io [] [] [create] subjectrulesreviews.authorization.openshift.io [] [] [create] buildconfigs.build.openshift.io/instantiate [] [] [create] buildconfigs.build.openshift.io/instantiatebinary [] [] [create] builds.build.openshift.io/clone [] [] [create] imagestreamimports.image.openshift.io [] [] [create] routes.route.openshift.io/custom-host [] [] [create] podsecuritypolicyreviews.security.openshift.io [] [] [create] podsecuritypolicyselfsubjectreviews.security.openshift.io [] [] [create] podsecuritypolicysubjectreviews.security.openshift.io [] [] [create] jenkins.build.openshift.io [] [] [edit view view admin edit view] builds [] [] [get create delete deletecollection get list patch update watch get list watch] builds.build.openshift.io [] [] [get create delete deletecollection get list patch update watch get list watch] projects [] [] [get delete get delete get patch update] projects.project.openshift.io [] [] [get delete get delete get patch update] namespaces [] [] [get get list watch] pods/attach [] [] [get list watch create delete deletecollection patch update] pods/exec [] [] [get list watch create delete deletecollection patch update] pods/portforward [] [] [get list watch create delete deletecollection patch update] pods/proxy [] [] [get list watch create delete deletecollection patch update] services/proxy [] [] [get list watch create delete deletecollection patch update] routes/status [] [] [get list watch update] routes.route.openshift.io/status [] [] [get list watch update] appliedclusterresourcequotas [] [] [get list watch] bindings [] [] [get list watch] builds/log [] [] [get list watch] deploymentconfigs/log [] [] [get list watch] deploymentconfigs/status [] [] [get list watch] events [] [] [get list watch] imagestreams/status [] [] [get list watch] limitranges [] [] [get list watch] namespaces/status [] [] [get list watch] pods/log [] [] [get list watch] pods/status [] [] [get list watch] replicationcontrollers/status [] [] [get list watch] resourcequotas/status [] [] [get list watch] resourcequotas [] [] [get list watch] resourcequotausages [] [] [get list watch] rolebindingrestrictions [] [] [get list watch] deploymentconfigs.apps.openshift.io/log [] [] [get list watch] deploymentconfigs.apps.openshift.io/status [] [] [get list watch] controllerrevisions.apps 
[] [] [get list watch] rolebindingrestrictions.authorization.openshift.io [] [] [get list watch] builds.build.openshift.io/log [] [] [get list watch] imagestreams.image.openshift.io/status [] [] [get list watch] appliedclusterresourcequotas.quota.openshift.io [] [] [get list watch] imagestreams/layers [] [] [get update get] imagestreams.image.openshift.io/layers [] [] [get update get] builds/details [] [] [update] builds.build.openshift.io/details [] [] [update] Name: basic-user Labels: <none> Annotations: openshift.io/description: A user that can get basic information about projects. rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- selfsubjectrulesreviews [] [] [create] selfsubjectaccessreviews.authorization.k8s.io [] [] [create] selfsubjectrulesreviews.authorization.openshift.io [] [] [create] clusterroles.rbac.authorization.k8s.io [] [] [get list watch] clusterroles [] [] [get list] clusterroles.authorization.openshift.io [] [] [get list] storageclasses.storage.k8s.io [] [] [get list] users [] [~] [get] users.user.openshift.io [] [~] [get] projects [] [] [list watch] projects.project.openshift.io [] [] [list watch] projectrequests [] [] [list] projectrequests.project.openshift.io [] [] [list] Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- *.* [] [] [*] [*] [] [*]", "oc describe clusterrolebinding.rbac", "Name: alertmanager-main Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: alertmanager-main Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount alertmanager-main openshift-monitoring Name: basic-users Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: basic-user Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated Name: cloud-credential-operator-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cloud-credential-operator-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-cloud-credential-operator Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:masters Name: cluster-admins Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:cluster-admins User system:admin Name: cluster-api-manager-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cluster-api-manager-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-machine-api", "oc describe rolebinding.rbac", "oc describe rolebinding.rbac -n joe-project", "Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. 
It is auto-managed by a controller; remove subjects to disa Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe-project Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe-project Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe-project", "oc adm policy add-role-to-user <role> <user> -n <project>", "oc adm policy add-role-to-user admin alice -n joe", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: admin-0 namespace: joe roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: admin subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: alice", "oc describe rolebinding.rbac -n <project>", "oc describe rolebinding.rbac -n joe", "Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: admin-0 Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User alice 1 Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe", "oc create role <name> --verb=<verb> --resource=<resource> -n <project>", "oc create role podview --verb=get --resource=pod -n blue", "oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue", "oc create clusterrole <name> --verb=<verb> --resource=<resource>", "oc create clusterrole podviewonly --verb=get --resource=pod", "oc adm policy add-cluster-role-to-user cluster-admin <user>", "INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). 
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided>", "oc delete secrets kubeadmin -n kube-system", "oc edit image.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Image 1 metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: 2 - domainName: quay.io insecure: false additionalTrustedCA: 3 name: myconfigmap registrySources: 4 allowedRegistries: - example.com - quay.io - registry.redhat.io - image-registry.openshift-image-registry.svc:5000 - reg1.io/myrepo/myapp:latest insecureRegistries: - insecure.com status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000", "oc get nodes", "NAME STATUS ROLES AGE VERSION ci-ln-j5cd0qt-f76d1-vfj5x-master-0 Ready master 98m v1.22.1 ci-ln-j5cd0qt-f76d1-vfj5x-master-1 Ready,SchedulingDisabled master 99m v1.22.1 ci-ln-j5cd0qt-f76d1-vfj5x-master-2 Ready master 98m v1.22.1 ci-ln-j5cd0qt-f76d1-vfj5x-worker-b-nsnd4 Ready worker 90m v1.22.1 ci-ln-j5cd0qt-f76d1-vfj5x-worker-c-5z2gz NotReady,SchedulingDisabled worker 90m v1.22.1 ci-ln-j5cd0qt-f76d1-vfj5x-worker-d-stsjv Ready worker 90m v1.22.1", "apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----", "oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config", "oc edit image.config.openshift.io cluster", "spec: additionalTrustedCA: name: registry-config", "skopeo copy docker://registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6 docker://example.io/example/ubi-minimal", "apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: ubi8repo spec: repositoryDigestMirrors: - mirrors: - example.io/example/ubi-minimal 1 - example.com/example/ubi-minimal 2 source: registry.access.redhat.com/ubi8/ubi-minimal 3 - mirrors: - mirror.example.com/redhat source: registry.redhat.io/openshift4 4 - mirrors: - mirror.example.com source: registry.redhat.io 5 - mirrors: - mirror.example.net/image source: registry.example.com/example/myimage 6 - mirrors: - mirror.example.net source: registry.example.com/example 7 - mirrors: - mirror.example.net/registry-example-com source: registry.example.com 8", "oc create -f registryrepomirror.yaml", "oc get node", "NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.24.0 ip-10-0-138-148.ec2.internal Ready master 11m v1.24.0 ip-10-0-139-122.ec2.internal Ready master 11m v1.24.0 ip-10-0-147-35.ec2.internal Ready worker 7m v1.24.0 ip-10-0-153-12.ec2.internal Ready worker 7m v1.24.0 ip-10-0-154-10.ec2.internal Ready master 11m v1.24.0", "oc debug node/ip-10-0-147-35.ec2.internal", "Starting pod/ip-10-0-147-35ec2internal-debug To use host binaries, run `chroot /host`", "sh-4.2# chroot /host", "sh-4.2# cat /etc/containers/registries.conf", "unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] short-name-mode = \"\" [[registry]] prefix = \"\" location = \"registry.access.redhat.com/ubi8/ubi-minimal\" mirror-by-digest-only = 
true [[registry.mirror]] location = \"example.io/example/ubi-minimal\" [[registry.mirror]] location = \"example.com/example/ubi-minimal\" [[registry]] prefix = \"\" location = \"registry.example.com\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.net/registry-example-com\" [[registry]] prefix = \"\" location = \"registry.example.com/example\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.net\" [[registry]] prefix = \"\" location = \"registry.example.com/example/myimage\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.net/image\" [[registry]] prefix = \"\" location = \"registry.redhat.io\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.com\" [[registry]] prefix = \"\" location = \"registry.redhat.io/openshift4\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.com/redhat\"", "sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6", "oc create -f <path/to/manifests/dir>/imageContentSourcePolicy.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog 1 namespace: openshift-marketplace 2 spec: sourceType: grpc image: <registry>/<namespace>/redhat-operator-index:v4.9 3 displayName: My Operator Catalog publisher: <publisher_name> 4 updateStrategy: registryPoll: 5 interval: 30m", "oc apply -f catalogSource.yaml", "oc get pods -n openshift-marketplace", "NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h", "oc get catalogsource -n openshift-marketplace", "NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s", "oc get packagemanifest -n openshift-marketplace", "NAME CATALOG AGE jaeger-product My Operator Catalog 93s", "oc get packagemanifests -n openshift-marketplace", "NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m", "oc describe packagemanifests <operator_name> -n openshift-marketplace", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace>", "oc apply -f operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: openshift-operators 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar", "oc apply -f sub.yaml", "cp </path/to/cert.crt> /usr/share/pki/ca-trust-source/anchors/", "update-ca-trust", "oc extract secret/pull-secret -n openshift-config --confirm --to=.", ".dockerconfigjson", "{\"auths\":{\"<local_registry>\": {\"auth\": 
\"<credentials>\",\"email\": \"[email protected]\"}}},\"<registry>:<port>/<namespace>/\":{\"auth\":\"<token>\"}}}", "{\"auths\":{\"cloud.openshift.com\":{\"auth\":\"b3BlbnNoaWZ0Y3UjhGOVZPT0lOMEFaUjdPUzRGTA==\",\"email\":\"[email protected]\"}, \"quay.io\":{\"auth\":\"b3BlbnNoaWZ0LXJlbGVhc2UtZGOVZPT0lOMEFaUGSTd4VGVGVUjdPUzRGTA==\",\"email\":\"[email protected]\"}, \"registry.connect.redhat.com\"{\"auth\":\"NTE3MTMwNDB8dWhjLTFEZlN3VHkxOSTd4VGVGVU1MdTpleUpoYkdjaUailA==\",\"email\":\"[email protected]\"}, \"registry.redhat.io\":{\"auth\":\"NTE3MTMwNDB8dWhjLTFEZlN3VH3BGSTd4VGVGVU1MdTpleUpoYkdjaU9fZw==\",\"email\":\"[email protected]\"}, \"registry.svc.ci.openshift.org\":{\"auth\":\"dXNlcjpyWjAwWVFjSEJiT2RKVW1pSmg4dW92dGp1SXRxQ3RGN1pwajJhN1ZXeTRV\"},\"my-registry:5000/my-namespace/\":{\"auth\":\"dXNlcm5hbWU6cGFzc3dvcmQ=\"}}}", "oc adm catalog mirror registry.redhat.io/redhat/redhat-operator-index:v{product-version} <mirror_registry>:<port>/olm -a <reg_creds>", "oc adm catalog mirror registry.redhat.io/redhat/redhat-operator-index:v4.8 mirror.registry.com:443/olm -a ./.dockerconfigjson --index-filter-by-os='.*'", "oc adm catalog mirror <index_image> <mirror_registry>:<port>/<namespace> -a <reg_creds>", "oc adm catalog mirror registry.redhat.io/redhat/community-operator-index:v4.8 mirror.registry.com:443/olm -a ./.dockerconfigjson --index-filter-by-os='.*'", "oc adm release mirror -a .dockerconfigjson --from=quay.io/openshift-release-dev/ocp-release:v<product-version>-<architecture> --to=<local_registry>/<local_repository> --to-release-image=<local_registry>/<local_repository>:v<product-version>-<architecture>", "oc adm release mirror -a .dockerconfigjson --from=quay.io/openshift-release-dev/ocp-release:4.8.15-x86_64 --to=mirror.registry.com:443/ocp/release --to-release-image=mirror.registry.com:443/ocp/release:4.8.15-x86_64", "info: Mirroring 109 images to mirror.registry.com/ocp/release mirror.registry.com:443/ ocp/release manifests: sha256:086224cadce475029065a0efc5244923f43fb9bb3bb47637e0aaf1f32b9cad47 -> 4.8.15-x86_64-thanos sha256:0a214f12737cb1cfbec473cc301aa2c289d4837224c9603e99d1e90fc00328db -> 4.8.15-x86_64-kuryr-controller sha256:0cf5fd36ac4b95f9de506623b902118a90ff17a07b663aad5d57c425ca44038c -> 4.8.15-x86_64-pod sha256:0d1c356c26d6e5945a488ab2b050b75a8b838fc948a75c0fa13a9084974680cb -> 4.8.15-x86_64-kube-client-agent ..... 
sha256:66e37d2532607e6c91eedf23b9600b4db904ce68e92b43c43d5b417ca6c8e63c mirror.registry.com:443/ocp/release:4.5.41-multus-admission-controller sha256:d36efdbf8d5b2cbc4dcdbd64297107d88a31ef6b0ec4a39695915c10db4973f1 mirror.registry.com:443/ocp/release:4.5.41-cluster-kube-scheduler-operator sha256:bd1baa5c8239b23ecdf76819ddb63cd1cd6091119fecdbf1a0db1fb3760321a2 mirror.registry.com:443/ocp/release:4.5.41-aws-machine-controllers info: Mirroring completed in 2.02s (0B/s) Success Update image: mirror.registry.com:443/ocp/release:4.5.41-x86_64 Mirror prefix: mirror.registry.com:443/ocp/release", "oc image mirror <online_registry>/my/image:latest <mirror_registry>", "oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1", "oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=.mirrorsecretconfigjson", "oc create configmap <config_map_name> --from-file=<mirror_address_host>..<port>=USDpath/ca.crt -n openshift-config", "S oc create configmap registry-config --from-file=mirror.registry.com..443=/root/certs/ca-chain.cert.pem -n openshift-config", "oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"<config_map_name>\"}}}' --type=merge", "oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-config\"}}}' --type=merge", "apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: mirror-ocp spec: repositoryDigestMirrors: - mirrors: - mirror.registry.com:443/ocp/release 1 source: quay.io/openshift-release-dev/ocp-release 2 - mirrors: - mirror.registry.com:443/ocp/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "oc create -f registryrepomirror.yaml", "imagecontentsourcepolicy.operator.openshift.io/mirror-ocp created", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-4.4# cat /var/lib/kubelet/config.json", "{\"auths\":{\"brew.registry.redhat.io\":{\"xx==\"},\"brewregistry.stage.redhat.io\":{\"auth\":\"xxx==\"},\"mirror.registry.com:443\":{\"auth\":\"xx=\"}}} 1", "sh-4.4# cd /etc/docker/certs.d/", "sh-4.4# ls", "image-registry.openshift-image-registry.svc.cluster.local:5000 image-registry.openshift-image-registry.svc:5000 mirror.registry.com:443 1", "sh-4.4# cat /etc/containers/registries.conf", "unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"quay.io/openshift-release-dev/ocp-release\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.registry.com:443/ocp/release\" [[registry]] prefix = \"\" location = \"quay.io/openshift-release-dev/ocp-v4.0-art-dev\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.registry.com:443/ocp/release\"", "sh-4.4# exit", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE kube-system apiserver-watcher-ci-ln-47ltxtb-f76d1-mrffg-master-0 1/1 Running 0 39m kube-system apiserver-watcher-ci-ln-47ltxtb-f76d1-mrffg-master-1 1/1 Running 0 39m kube-system apiserver-watcher-ci-ln-47ltxtb-f76d1-mrffg-master-2 1/1 Running 0 39m openshift-apiserver-operator openshift-apiserver-operator-79c7c646fd-5rvr5 1/1 Running 3 45m openshift-apiserver apiserver-b944c4645-q694g 2/2 Running 0 29m openshift-apiserver apiserver-b944c4645-shdxb 2/2 Running 0 31m openshift-apiserver apiserver-b944c4645-x7rf2 2/2 Running 0 33m", "oc get nodes", "NAME STATUS ROLES AGE VERSION ci-ln-47ltxtb-f76d1-mrffg-master-0 Ready master 42m v1.21.1+a620f50 
ci-ln-47ltxtb-f76d1-mrffg-master-1 Ready master 42m v1.21.1+a620f50 ci-ln-47ltxtb-f76d1-mrffg-master-2 Ready master 42m v1.21.1+a620f50 ci-ln-47ltxtb-f76d1-mrffg-worker-a-gsxbz Ready worker 35m v1.21.1+a620f50 ci-ln-47ltxtb-f76d1-mrffg-worker-b-5qqdx Ready worker 35m v1.21.1+a620f50 ci-ln-47ltxtb-f76d1-mrffg-worker-c-rjkpq Ready worker 34m v1.21.1+a620f50", "\"cloud.openshift.com\":{\"auth\":\"<hash>\",\"email\":\"[email protected]\"}", "oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=./.dockerconfigjson", "oc get co insights", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE insights 4.5.41 True False False 3d", "oc get imagecontentsourcepolicy", "NAME AGE mirror-ocp 6d20h ocp4-index-0 6d18h qe45-index-0 6d15h", "oc delete imagecontentsourcepolicy <icsp_name> <icsp_name> <icsp_name>", "oc delete imagecontentsourcepolicy mirror-ocp ocp4-index-0 qe45-index-0", "imagecontentsourcepolicy.operator.openshift.io \"mirror-ocp\" deleted imagecontentsourcepolicy.operator.openshift.io \"ocp4-index-0\" deleted imagecontentsourcepolicy.operator.openshift.io \"qe45-index-0\" deleted", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-4.4# cat /etc/containers/registries.conf", "unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] 1", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker0 spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker0]} nodeSelector: matchLabels: node-role.kubernetes.io/worker0: \"\"", "ACTION==\"add\", SUBSYSTEM==\"ccw\", KERNEL==\"0.0.8000\", DRIVER==\"zfcp\", GOTO=\"cfg_zfcp_host_0.0.8000\" ACTION==\"add\", SUBSYSTEM==\"drivers\", KERNEL==\"zfcp\", TEST==\"[ccw/0.0.8000]\", GOTO=\"cfg_zfcp_host_0.0.8000\" GOTO=\"end_zfcp_host_0.0.8000\" LABEL=\"cfg_zfcp_host_0.0.8000\" ATTR{[ccw/0.0.8000]online}=\"1\" LABEL=\"end_zfcp_host_0.0.8000\"", "base64 /path/to/file/", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker0 1 name: 99-worker0-devices spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;base64,<encoded_base64_string> 2 filesystem: root mode: 420 path: /etc/udev/rules.d/41-zfcp-host-0.0.8000.rules 3", "ACTION==\"add\", SUBSYSTEMS==\"ccw\", KERNELS==\"0.0.8000\", GOTO=\"start_zfcp_lun_0.0.8207\" GOTO=\"end_zfcp_lun_0.0.8000\" LABEL=\"start_zfcp_lun_0.0.8000\" SUBSYSTEM==\"fc_remote_ports\", ATTR{port_name}==\"0x500507680d760026\", GOTO=\"cfg_fc_0.0.8000_0x500507680d760026\" GOTO=\"end_zfcp_lun_0.0.8000\" LABEL=\"cfg_fc_0.0.8000_0x500507680d760026\" ATTR{[ccw/0.0.8000]0x500507680d760026/unit_add}=\"0x00bc000000000000\" GOTO=\"end_zfcp_lun_0.0.8000\" LABEL=\"end_zfcp_lun_0.0.8000\"", "base64 /path/to/file/", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker0 1 name: 99-worker0-devices spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;base64,<encoded_base64_string> 2 filesystem: root mode: 420 path: /etc/udev/rules.d/41-zfcp-lun-0.0.8000:0x500507680d760026:0x00bc000000000000.rules 3", "ACTION==\"add\", SUBSYSTEM==\"ccw\", KERNEL==\"0.0.4444\", DRIVER==\"dasd-eckd\", GOTO=\"cfg_dasd_eckd_0.0.4444\" ACTION==\"add\", SUBSYSTEM==\"drivers\", KERNEL==\"dasd-eckd\", TEST==\"[ccw/0.0.4444]\", GOTO=\"cfg_dasd_eckd_0.0.4444\" GOTO=\"end_dasd_eckd_0.0.4444\" 
LABEL=\"cfg_dasd_eckd_0.0.4444\" ATTR{[ccw/0.0.4444]online}=\"1\" LABEL=\"end_dasd_eckd_0.0.4444\"", "base64 /path/to/file/", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker0 1 name: 99-worker0-devices spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;base64,<encoded_base64_string> 2 filesystem: root mode: 420 path: /etc/udev/rules.d/41-dasd-eckd-0.0.4444.rules 3", "ACTION==\"add\", SUBSYSTEM==\"drivers\", KERNEL==\"qeth\", GOTO=\"group_qeth_0.0.1000\" ACTION==\"add\", SUBSYSTEM==\"ccw\", KERNEL==\"0.0.1000\", DRIVER==\"qeth\", GOTO=\"group_qeth_0.0.1000\" ACTION==\"add\", SUBSYSTEM==\"ccw\", KERNEL==\"0.0.1001\", DRIVER==\"qeth\", GOTO=\"group_qeth_0.0.1000\" ACTION==\"add\", SUBSYSTEM==\"ccw\", KERNEL==\"0.0.1002\", DRIVER==\"qeth\", GOTO=\"group_qeth_0.0.1000\" ACTION==\"add\", SUBSYSTEM==\"ccwgroup\", KERNEL==\"0.0.1000\", DRIVER==\"qeth\", GOTO=\"cfg_qeth_0.0.1000\" GOTO=\"end_qeth_0.0.1000\" LABEL=\"group_qeth_0.0.1000\" TEST==\"[ccwgroup/0.0.1000]\", GOTO=\"end_qeth_0.0.1000\" TEST!=\"[ccw/0.0.1000]\", GOTO=\"end_qeth_0.0.1000\" TEST!=\"[ccw/0.0.1001]\", GOTO=\"end_qeth_0.0.1000\" TEST!=\"[ccw/0.0.1002]\", GOTO=\"end_qeth_0.0.1000\" ATTR{[drivers/ccwgroup:qeth]group}=\"0.0.1000,0.0.1001,0.0.1002\" GOTO=\"end_qeth_0.0.1000\" LABEL=\"cfg_qeth_0.0.1000\" ATTR{[ccwgroup/0.0.1000]online}=\"1\" LABEL=\"end_qeth_0.0.1000\"", "base64 /path/to/file/", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker0 1 name: 99-worker0-devices spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;base64,<encoded_base64_string> 2 filesystem: root mode: 420 path: /etc/udev/rules.d/41-dasd-eckd-0.0.4444.rules 3", "ssh <user>@<node_ip_address>", "oc debug node/<node_name>", "sudo chzdev -e 0.0.8000 sudo chzdev -e 1000-1002 sude chzdev -e 4444 sudo chzdev -e 0.0.8000:0x500507680d760026:0x00bc000000000000", "ssh <user>@<node_ip_address>", "oc debug node/<node_name>", "sudo /sbin/mpathconf --enable", "sudo multipath", "sudo fdisk /dev/mapper/mpatha", "sudo multipath -II", "mpatha (20017380030290197) dm-1 IBM,2810XIV size=512G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw -+- policy='service-time 0' prio=50 status=enabled |- 1:0:0:6 sde 68:16 active ready running |- 1:0:1:6 sdf 69:24 active ready running |- 0:0:0:6 sdg 8:80 active ready running `- 0:0:1:6 sdh 66:48 active ready running" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html-single/post-installation_configuration/index
14.5.15. Using blockcommit to Shorten a Backing Chain
14.5.15. Using blockcommit to Shorten a Backing Chain This section demonstrates how to use virsh blockcommit to shorten a backing chain. For more background on backing chains, see Section 14.5.18, "Disk Image Management with Live Block Copy". blockcommit copies data from one part of the chain down into a backing file, allowing you to pivot the rest of the chain in order to bypass the committed portions. For example, suppose this is the current state: Using blockcommit moves the contents of snap2 into snap1, allowing you to delete snap2 from the chain, making backups much quicker. Procedure 14.2. virsh blockcommit Run the following command: The contents of snap2 are moved into snap1, resulting in: base <- snap1 <- active. Snap2 is no longer valid and can be deleted. Warning: blockcommit will corrupt any file that depends on the -base option (other than files that depend on the -top option, as those files now point to the base). To prevent this, do not commit changes into files shared by more than one guest. The -verbose option prints the progress on the screen.
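For illustration only, the whole operation on a hypothetical guest might look like the following; the domain name guest1 and disk vda are placeholders, not values from the original procedure, and the options mirror the command shown in this section:

    # chain before the commit: base <- snap1 <- snap2 <- active
    virsh blockcommit guest1 vda -base snap1 -top snap2 -wait -verbose
    # chain after the commit:  base <- snap1 <- active  (snap2 can now be discarded)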
[ "base <- snap1 <- snap2 <- active .", "virsh blockcommit USDdom USDdisk -base snap1 -top snap2 -wait -verbose" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-Domain_Commands-Using_blockcommit_to_shorten_a_backing_chain
Chapter 3. Recreating the fapolicyd trust files when updating SAP HANA
Chapter 3. Recreating the fapolicyd trust files when updating SAP HANA Prerequisites The fapolicyd package is installed on your system. You have verified that there are no new executables in the SAP HANA software directories, so that you do not accidentally add software from unknown sources. For more information, refer to Marking the SAP HANA files as trusted. Procedure Stop fapolicyd before performing the SAP HANA software update: Create a backup of the existing fapolicyd trust files /etc/fapolicyd/trust.d/hana and /etc/fapolicyd/trust.d/usr_sap, and then remove these files. Perform the SAP HANA software update. Repeat step 2 of the procedure in Marking the SAP HANA files as trusted to recreate the fapolicyd trust files for SAP HANA. Start fapolicyd:
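A minimal end-to-end sketch of this procedure is shown below. It assumes the trust file paths given above; the backup directory /root/fapolicyd-backup is only a hypothetical example location:

    systemctl stop fapolicyd
    mkdir -p /root/fapolicyd-backup
    cp /etc/fapolicyd/trust.d/hana /etc/fapolicyd/trust.d/usr_sap /root/fapolicyd-backup/
    rm /etc/fapolicyd/trust.d/hana /etc/fapolicyd/trust.d/usr_sap
    # perform the SAP HANA software update, then recreate the trust files as
    # described in "Marking the SAP HANA files as trusted"
    systemctl start fapolicyd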
[ "systemctl stop fapolicyd", "systemctl start fapolicyd" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/configuring_fapolicyd_to_allow_only_sap_hana_executables/proc_recreating_configuring-fapolicyd
Appendix A. Troubleshooting: General Guidelines
Appendix A. Troubleshooting: General Guidelines This appendix describes general steps for determining the root cause of a problem, for example by querying logs and service statuses. Note For lists of specific problems and their solutions, see Appendix B, Troubleshooting: Solutions to Specific Problems. What were you doing when you encountered the problem? Executing a command using the ipa utility Authenticating Using kinit Authenticating to the IdM web UI Authenticating with a Smart Card Starting a Service If you know which specific area of IdM is causing the problem, follow these links: DNS Replication If this guide does not help you find and fix the problem and you proceed to file a customer case, include any notable error output that you determined using these troubleshooting procedures in the case report. See also Contacting Red Hat Technical Support. A.1. Investigating Failures when Executing the ipa Utility Basic Troubleshooting Add the --verbose (-v) option to the command. This displays debug information. Add the -vv option to the command. This displays the JSON response and request. Advanced Troubleshooting Figure A.1, "The architecture of executing the ipa cert-show command" shows which components interact when the user uses the IdM command-line utility. Querying these components can help you investigate where the problem occurred and what caused it. Use the following utilities: host to check the DNS resolution of the IdM server or client ping to check if the IdM server is available iptables to check the current firewall configuration on the IdM server date to check the current time nc to try to connect to the required ports, as listed in Section 2.1.6, "Port Requirements". For details on using these utilities, see their man pages. Set the KRB5_TRACE environment variable to the /dev/stdout file to send trace-logging output to /dev/stdout: Review the Kerberos key distribution center (KDC) log: /var/log/krb5kdc.log. Review the Apache error log: Enable debug level on the server: Open the /etc/ipa/server.conf file, and add the debug=True option to the [global] section. Restart the httpd service: Run the command that failed again. Review the httpd error log on the server: /var/log/httpd/error_log. Run the command with the -vvv option to display the HTTP request and response. Review the Apache access log: /var/log/httpd/access_log. Review the logs for the Certificate System component: /var/log/pki/pki-ca-spawn.time_of_installation.log /var/log/pki/pki-tomcat/ca/debug /var/log/pki/pki-tomcat/ca/system /var/log/pki/pki-tomcat/ca/selftests.log Use the # journalctl -u pki-tomcatd@pki-tomcat.service command to review the journal log. Review the Directory Server access log: /var/log/dirsrv/slapd-IPA-EXAMPLE-COM/access. Figure A.1. The architecture of executing the ipa cert-show command Related Information See Section C.2, "Identity Management Log Files and Directories" for descriptions of various Identity Management log files.
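As an illustrative first pass (ipa cert-find stands in for whatever command is failing, and idmserver.example.com is a placeholder hostname, not taken from the original text), the basic checks described above can be combined as follows:

    ipa -vv cert-find                        # show the JSON request and response
    KRB5_TRACE=/dev/stdout ipa cert-find     # add Kerberos trace logging to stdout
    host idmserver.example.com               # check DNS resolution of the IdM server
    ping -c 3 idmserver.example.com          # check that the server is reachable
    nc -v idmserver.example.com 443          # check that a required port is open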
[ "KRB5_TRACE=/dev/stdout ipa cert-find", "systemctl restart httpd.service" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/trouble-general
B.82. rsync
B.82. rsync B.82.1. RHSA-2011:0390 - Moderate: rsync security update An updated rsync package that fixes one security issue is now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. rsync is a program for synchronizing files over a network. CVE-2011-1097 A memory corruption flaw was found in the way the rsync client processed malformed file list data. If an rsync client used the "--recursive" and "--delete" options without the "--owner" option when connecting to a malicious rsync server, the malicious server could cause rsync on the client system to crash or, possibly, execute arbitrary code with the privileges of the user running rsync. Red Hat would like to thank Wayne Davison and Matt McCutchen for reporting this issue. Users of rsync should upgrade to this updated package, which contains a backported patch to resolve this issue.
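Applying the erratum is a standard package update; an illustrative invocation on an affected Red Hat Enterprise Linux 6 system would be:

    yum update rsync
    rpm -q rsync    # confirm that the updated package is now installed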
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/rsync
1.2. Power Management Basics
1.2. Power Management Basics Effective power management is built on the following principles: An idle CPU should only wake up when needed The Red Hat Enterprise Linux 5 kernel used a periodic timer for each CPU. This timer prevents the CPU from truly going idle, as it requires the CPU to process each timer event (which would happen every few milliseconds, depending on the setting), regardless of whether any process was running or not. A large part of effective power management involves reducing the frequency at which CPU wakeups are made. Because of this, the Linux kernel in Red Hat Enterprise Linux 6 eliminates the periodic timer: as a result, the idle state of a CPU is now tickless. This prevents the CPU from consuming unnecessary power when it is idle. However, benefits from this feature can be offset if your system has applications that create unnecessary timer events. Polling events (such as checks for volume changes, mouse movement, and the like) are examples of such events. Red Hat Enterprise Linux 6 includes tools with which you can identify and audit applications on the basis of their CPU usage. Refer to Chapter 2, Power management auditing and analysis for details. Unused hardware and devices should be disabled completely This is especially true for devices that have moving parts (for example, hard disks). In addition to this, some applications may leave an unused but enabled device "open"; when this occurs, the kernel assumes that the device is in use, which can prevent the device from going into a power saving state. Low activity should translate to low wattage In many cases, however, this depends on modern hardware and correct BIOS configuration. Older system components often do not have support for some of the new features that we can now support in Red Hat Enterprise Linux 6. Make sure that you are using the latest official firmware for your systems and that the power management features are enabled in the power management or device configuration sections of the BIOS. Some features to look for include: SpeedStep PowerNow! Cool'n'Quiet ACPI (C state) Smart If your hardware has support for these features and they are enabled in the BIOS, Red Hat Enterprise Linux 6 will use them by default. Different forms of CPU states and their effects Modern CPUs together with Advanced Configuration and Power Interface (ACPI) provide different power states. The three different states are: Sleep (C-states) Frequency (P-states) Heat output (T-states or "thermal states") A CPU running on the lowest sleep state possible consumes the least amount of watts, but it also takes considerably more time to wake it up from that state when needed. In very rare cases this can lead to the CPU having to wake up immediately every time it just went to sleep. This situation results in an effectively permanently busy CPU and loses some of the potential power saving that another state would have provided. A turned off machine uses the least amount of power As obvious as this might sound, one of the best ways to actually save power is to turn off systems. For example, your company can develop a corporate culture focused on "green IT" awareness with a guideline to turn off machines during lunch breaks or when going home. You also might consolidate several physical servers into one bigger server and virtualize them using the virtualization technology we ship with Red Hat Enterprise Linux 6.
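As a quick, illustrative way to see which of these CPU states your system exposes (not part of the original text, and assuming the cpuidle and cpufreq drivers are loaded), you can read the values directly from sysfs:

    # C-states (sleep states) available to CPU 0
    cat /sys/devices/system/cpu/cpu0/cpuidle/state*/name
    # current frequency-scaling (P-state) governor for CPU 0
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor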
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/power_management_guide/basics
Chapter 4. Using SSL to protect connections to Red Hat Quay
Chapter 4. Using SSL to protect connections to Red Hat Quay 4.1. Using SSL/TLS To configure Red Hat Quay with a self-signed certificate, you must create a Certificate Authority (CA) and a primary key file named ssl.cert and ssl.key . Note The following examples assume that you have configured the server hostname quay-server.example.com using DNS or another naming mechanism, such as adding an entry in your /etc/hosts file. For more information, see "Configuring port mapping for Red Hat Quay". 4.2. Creating a Certificate Authority Use the following procedure to create a Certificate Authority (CA). Procedure Generate the root CA key by entering the following command: USD openssl genrsa -out rootCA.key 2048 Generate the root CA certificate by entering the following command: USD openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem Enter the information that will be incorporated into your certificate request, including the server hostname, for example: Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com 4.2.1. Signing the certificate Use the following procedure to sign the certificate. Procedure Generate the server key by entering the following command: USD openssl genrsa -out ssl.key 2048 Generate a signing request by entering the following command: USD openssl req -new -key ssl.key -out ssl.csr Enter the information that will be incorporated into your certificate request, including the server hostname, for example: Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com Create a configuration file openssl.cnf , specifying the server hostname, for example: openssl.cnf [req] req_extensions = v3_req distinguished_name = req_distinguished_name [req_distinguished_name] [ v3_req ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] DNS.1 = quay-server.example.com IP.1 = 192.168.1.112 Use the configuration file to generate the certificate ssl.cert : USD openssl x509 -req -in ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out ssl.cert -days 356 -extensions v3_req -extfile openssl.cnf 4.3. Configuring SSL/TLS using the command line interface Use the following procedure to configure SSL/TLS using the CLI. Prerequisites You have created a certificate authority and signed the certificate. Procedure Copy the certificate file and primary key file to your configuration directory, ensuring they are named ssl.cert and ssl.key respectively: cp ~/ssl.cert ~/ssl.key USDQUAY/config Change into the USDQUAY/config directory by entering the following command: USD cd USDQUAY/config Edit the config.yaml file and specify that you want Red Hat Quay to handle TLS/SSL: config.yaml ... SERVER_HOSTNAME: quay-server.example.com ... PREFERRED_URL_SCHEME: https ... 
Optional: Append the contents of the rootCA.pem file to the end of the ssl.cert file by entering the following command: USD cat rootCA.pem >> ssl.cert Stop the Quay container by entering the following command: USD sudo podman stop quay Restart the registry by entering the following command: 4.4. Configuring SSL/TLS using the Red Hat Quay UI Use the following procedure to configure SSL/TLS using the Red Hat Quay UI. To configure SSL/TLS using the command line interface, see "Configuring SSL/TLS using the command line interface". Prerequisites You have created a certificate authority and signed a certificate. Procedure Start the Quay container in configuration mode: In the Server Configuration section, select Red Hat Quay handles TLS for SSL/TLS. Upload the certificate file and private key file created earlier, ensuring that the Server Hostname matches the value used when the certificates were created. Validate and download the updated configuration. Stop the Quay container and then restart the registry by entering the following command: 4.5. Testing the SSL/TLS configuration using the CLI Use the following procedure to test your SSL/TLS configuration using the CLI. Procedure Enter the following command to attempt to log in to the Red Hat Quay registry with SSL/TLS enabled: USD sudo podman login quay-server.example.com Example output Error: error authenticating creds for "quay-server.example.com": error pinging docker registry quay-server.example.com: Get "https://quay-server.example.com/v2/": x509: certificate signed by unknown authority Because Podman does not trust self-signed certificates, you must use the --tls-verify=false option: USD sudo podman login --tls-verify=false quay-server.example.com Example output Login Succeeded! In a subsequent section, you will configure Podman to trust the root Certificate Authority. 4.6. Testing the SSL/TLS configuration using a browser Use the following procedure to test your SSL/TLS configuration using a browser. Procedure Navigate to your Red Hat Quay registry endpoint, for example, https://quay-server.example.com . If configured correctly, the browser warns of the potential risk: Proceed to the log in screen. The browser notifies you that the connection is not secure. For example: In the following section, you will configure Podman to trust the root Certificate Authority. 4.7. Configuring Podman to trust the Certificate Authority Podman uses two paths to locate the Certificate Authority (CA) file: /etc/containers/certs.d/ and /etc/docker/certs.d/ . Use the following procedure to configure Podman to trust the CA. Procedure Copy the root CA file to one of /etc/containers/certs.d/ or /etc/docker/certs.d/ . Use the exact path determined by the server hostname, and name the file ca.crt : USD sudo cp rootCA.pem /etc/containers/certs.d/quay-server.example.com/ca.crt Verify that you no longer need to use the --tls-verify=false option when logging in to your Red Hat Quay registry: USD sudo podman login quay-server.example.com Example output Login Succeeded! 4.8. Configuring the system to trust the certificate authority Use the following procedure to configure your system to trust the certificate authority. Procedure Enter the following command to copy the rootCA.pem file to the consolidated system-wide trust store: USD sudo cp rootCA.pem /etc/pki/ca-trust/source/anchors/ Enter the following command to update the system-wide trust store configuration: USD sudo update-ca-trust extract Optional. 
You can use the trust list command to ensure that the Quay server has been configured: USD trust list | grep quay label: quay-server.example.com Now, when you browse to the registry at https://quay-server.example.com , the lock icon shows that the connection is secure: To remove the rootCA.pem file from system-wide trust, delete the file and update the configuration: USD sudo rm /etc/pki/ca-trust/source/anchors/rootCA.pem USD sudo update-ca-trust extract USD trust list | grep quay More information can be found in the RHEL 9 documentation in the chapter Using shared system certificates .
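As an additional, optional check that is not part of the documented procedure, you can verify the certificate chain directly with openssl, reusing the rootCA.pem and ssl.cert files and the quay-server.example.com hostname from the earlier steps:

    # verify the signed certificate against the self-created CA
    openssl verify -CAfile rootCA.pem ssl.cert
    # inspect the certificate actually served by the running registry
    openssl s_client -connect quay-server.example.com:443 -CAfile rootCA.pem < /dev/null | openssl x509 -noout -subject -issuer -dates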
[ "openssl genrsa -out rootCA.key 2048", "openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem", "Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com", "openssl genrsa -out ssl.key 2048", "openssl req -new -key ssl.key -out ssl.csr", "Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com", "[req] req_extensions = v3_req distinguished_name = req_distinguished_name [req_distinguished_name] [ v3_req ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] DNS.1 = quay-server.example.com IP.1 = 192.168.1.112", "openssl x509 -req -in ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out ssl.cert -days 356 -extensions v3_req -extfile openssl.cnf", "cp ~/ssl.cert ~/ssl.key USDQUAY/config", "cd USDQUAY/config", "SERVER_HOSTNAME: quay-server.example.com PREFERRED_URL_SCHEME: https", "cat rootCA.pem >> ssl.cert", "sudo podman stop quay", "sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.10.9", "sudo podman run --rm -it --name quay_config -p 80:8080 -p 443:8443 registry.redhat.io/quay/quay-rhel8:v3.10.9 config secret", "sudo podman rm -f quay sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.10.9", "sudo podman login quay-server.example.com", "Error: error authenticating creds for \"quay-server.example.com\": error pinging docker registry quay-server.example.com: Get \"https://quay-server.example.com/v2/\": x509: certificate signed by unknown authority", "sudo podman login --tls-verify=false quay-server.example.com", "Login Succeeded!", "sudo cp rootCA.pem /etc/containers/certs.d/quay-server.example.com/ca.crt", "sudo podman login quay-server.example.com", "Login Succeeded!", "sudo cp rootCA.pem /etc/pki/ca-trust/source/anchors/", "sudo update-ca-trust extract", "trust list | grep quay label: quay-server.example.com", "sudo rm /etc/pki/ca-trust/source/anchors/rootCA.pem", "sudo update-ca-trust extract", "trust list | grep quay" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/manage_red_hat_quay/using-ssl-to-protect-quay
Images
Images OpenShift Container Platform 4.17 Creating and managing images and imagestreams in OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "registry.redhat.io", "docker.io/openshift/jenkins-2-centos7", "registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2", "docker.io/openshift/jenkins-2-centos7@sha256:ab312bda324", "apiVersion: samples.operator.openshift.io/v1 kind: Config metadata: name: cluster spec: architectures: - x86_64 managementState: Removed", "oc edit configs.samples.operator.openshift.io/cluster", "apiVersion: samples.operator.openshift.io/v1 kind: Config", "oc tag -d <image_stream_name:tag>", "Deleted tag default/<image_stream_name:tag>.", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1", "{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "echo -n '<user_name>:<password>' | base64 -w0 1", "BGVtbYk3ZHAtqXs=", "\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" } },", "{ \"auths\": { \"registry.example.com\": { \"auth\": \"BGVtbYk3ZHAtqXs=\", \"email\": \"[email protected]\" }, \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "OCP_RELEASE=<release_version>", "LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'", "LOCAL_REPOSITORY='<local_repository_name>'", "PRODUCT_REPO='openshift-release-dev'", "LOCAL_SECRET_JSON='<path_to_pull_secret>'", "RELEASE_NAME=\"ocp-release\"", "ARCHITECTURE=<cluster_architecture> 1", "REMOVABLE_MEDIA_PATH=<path> 1", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "oc adm release extract -a USD{LOCAL_SECRET_JSON} --icsp-file=<file> --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\" --insecure=true 1", "oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\"", "openshift-install", "oc get is <imagestream> -n openshift -o json | jq .spec.tags[].from.name | grep registry.redhat.io", "oc image mirror 
registry.redhat.io/rhscl/ruby-25-rhel7:latest USD{MIRROR_ADDR}/rhscl/ruby-25-rhel7:latest", "oc create configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config", "oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-config\"}}}' --type=merge", "oc edit configs.samples.operator.openshift.io -n openshift-cluster-samples-operator", "RUN yum -y install mypackage && yum -y install myotherpackage && yum clean all -y", "RUN yum -y install mypackage RUN yum -y install myotherpackage && yum clean all -y", "FROM foo RUN yum -y install mypackage && yum clean all -y ADD myfile /test/myfile", "FROM foo ADD myfile /test/myfile RUN yum -y install mypackage && yum clean all -y", "RUN chgrp -R 0 /some/directory && chmod -R g=u /some/directory", "LABEL io.openshift.tags mongodb,mongodb24,nosql", "LABEL io.openshift.wants mongodb,redis", "LABEL io.k8s.description The MySQL 5.5 Server with master-slave replication support", "LABEL io.openshift.non-scalable true", "LABEL io.openshift.min-memory 16Gi LABEL io.openshift.min-cpu 4", "#!/bin/bash restore build artifacts if [ \"USD(ls /tmp/s2i/artifacts/ 2>/dev/null)\" ]; then mv /tmp/s2i/artifacts/* USDHOME/. fi move the application source mv /tmp/s2i/src USDHOME/src build application artifacts pushd USD{HOME} make all install the artifacts make install popd", "#!/bin/bash run the application /opt/application/run.sh", "#!/bin/bash pushd USD{HOME} if [ -d deps ]; then # all deps contents to tar stream tar cf - deps fi popd", "#!/bin/bash inform the user how to use the image cat <<EOF This is a S2I sample builder image, to use it, install https://github.com/openshift/source-to-image EOF", "s2i create <image_name> <destination_directory>", "IMAGE_NAME = openshift/ruby-20-centos7 CONTAINER_ENGINE := USD(shell command -v podman 2> /dev/null | echo docker) build: USD{CONTAINER_ENGINE} build -t USD(IMAGE_NAME) . .PHONY: test test: USD{CONTAINER_ENGINE} build -t USD(IMAGE_NAME)-candidate . 
IMAGE_NAME=USD(IMAGE_NAME)-candidate test/run", "podman build -t <builder_image_name>", "docker build -t <builder_image_name>", "podman run <builder_image_name> .", "docker run <builder_image_name> .", "s2i build file:///path-to-sample-app _<BUILDER_IMAGE_NAME>_ _<OUTPUT_APPLICATION_IMAGE_NAME>_", "podman run <output_application_image_name>", "docker run <output_application_image_name>", "registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2", "oc tag <source> <destination>", "oc tag ruby:2.0 ruby:static-2.0", "oc tag --alias=true <source> <destination>", "oc delete istag/ruby:latest", "oc tag -d ruby:latest", "<image_stream_name>:<tag>", "<image_stream_name>@<id>", "openshift/ruby-20-centos7:2.0", "registry.redhat.io/rhel7:latest", "centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e", "oc policy add-role-to-user system:image-puller system:serviceaccount:project-a:default --namespace=project-b", "oc policy add-role-to-group system:image-puller system:serviceaccounts:project-a --namespace=project-b", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io/repository-main\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "apiVersion: v1 data: .dockerconfigjson: ewogICAiYXV0aHMiOnsKICAgICAgIm0iOnsKICAgICAgIsKICAgICAgICAgImF1dGgiOiJiM0JsYj0iLAogICAgICAgICAiZW1haWwiOiJ5b3VAZXhhbXBsZS5jb20iCiAgICAgIH0KICAgfQp9Cg== kind: Secret metadata: creationTimestamp: \"2021-09-09T19:10:11Z\" name: pull-secret namespace: default resourceVersion: \"37676\" uid: e2851531-01bc-48ba-878c-de96cfe31020 type: Opaque", "oc create secret generic <pull_secret_name> --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson", "oc create secret generic <pull_secret_name> --from-file=<path/to/.config/containers/auth.json> --type=kubernetes.io/podmanconfigjson", "oc create secret docker-registry <pull_secret_name> --docker-server=<registry_server> --docker-username=<user_name> --docker-password=<password> --docker-email=<email>", "oc secrets link default <pull_secret_name> --for=pull", "oc get serviceaccount default -o yaml", "apiVersion: v1 imagePullSecrets: - name: default-dockercfg-123456 - name: <pull_secret_name> kind: ServiceAccount metadata: annotations: openshift.io/internal-registry-pull-secret-ref: <internal_registry_pull_secret> creationTimestamp: \"2025-03-03T20:07:52Z\" name: default namespace: default resourceVersion: \"13914\" uid: 9f62dd88-110d-4879-9e27-1ffe269poe3 secrets: - name: <pull_secret_name>", "apiVersion: v1 kind: Pod metadata: name: <secure_pod_name> spec: containers: - name: <container_name> image: quay.io/my-private-image imagePullSecrets: - name: <pull_secret_name>", "apiVersion: argoproj.io/v1alpha1 kind: Workflow metadata: generateName: <example_workflow> spec: entrypoint: <main_task> imagePullSecrets: - name: <pull_secret_name>", "oc create secret docker-registry --docker-server=sso.redhat.com [email protected] --docker-password=******** --docker-email=unused redhat-connect-sso secret/redhat-connect-sso", "oc create secret docker-registry --docker-server=privateregistry.example.com [email protected] --docker-password=******** --docker-email=unused private-registry secret/private-registry", "oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' ><pull_secret_location> 1", "oc registry login 
--registry=\"<registry>\" \\ 1 --auth-basic=\"<username>:<password>\" \\ 2 --to=<pull_secret_location> 3", "oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1", "apiVersion: image.openshift.io/v1 kind: ImageStream metadata: annotations: openshift.io/generated-by: OpenShiftNewApp labels: app: ruby-sample-build template: application-template-stibuild name: origin-ruby-sample 1 namespace: test spec: {} status: dockerImageRepository: 172.30.56.218:5000/test/origin-ruby-sample 2 tags: - items: - created: 2017-09-02T10:15:09Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d 3 generation: 2 image: sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 4 - created: 2017-09-01T13:40:11Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 generation: 1 image: sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d tag: latest 5", "<image-stream-name>@<image-id>", "origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d", "kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: my-image-stream tags: - items: - created: 2017-09-02T10:15:09Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d generation: 2 image: sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 - created: 2017-09-01T13:40:11Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 generation: 1 image: sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d tag: latest", "<imagestream name>:<tag>", "origin-ruby-sample:latest", "apiVersion: image.openshift.io/v1 kind: ImageStreamMapping metadata: creationTimestamp: null name: origin-ruby-sample namespace: test tag: latest image: dockerImageLayers: - name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef size: 0 - name: sha256:ee1dd2cb6df21971f4af6de0f1d7782b81fb63156801cfde2bb47b4247c23c29 size: 196634330 - name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef size: 0 - name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef size: 0 - name: sha256:ca062656bff07f18bff46be00f40cfbb069687ec124ac0aa038fd676cfaea092 size: 177723024 - name: sha256:63d529c59c92843c395befd065de516ee9ed4995549f8218eac6ff088bfa6b6e size: 55679776 - name: sha256:92114219a04977b5563d7dff71ec4caa3a37a15b266ce42ee8f43dba9798c966 size: 11939149 dockerImageMetadata: Architecture: amd64 Config: Cmd: - /usr/libexec/s2i/run Entrypoint: - container-entrypoint Env: - RACK_ENV=production - OPENSHIFT_BUILD_NAMESPACE=test - OPENSHIFT_BUILD_SOURCE=https://github.com/openshift/ruby-hello-world.git - EXAMPLE=sample-app - OPENSHIFT_BUILD_NAME=ruby-sample-build-1 - PATH=/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin - STI_SCRIPTS_URL=image:///usr/libexec/s2i - STI_SCRIPTS_PATH=/usr/libexec/s2i - HOME=/opt/app-root/src - BASH_ENV=/opt/app-root/etc/scl_enable - ENV=/opt/app-root/etc/scl_enable - PROMPT_COMMAND=. 
/opt/app-root/etc/scl_enable - RUBY_VERSION=2.2 ExposedPorts: 8080/tcp: {} Labels: build-date: 2015-12-23 io.k8s.description: Platform for building and running Ruby 2.2 applications io.k8s.display-name: 172.30.56.218:5000/test/origin-ruby-sample:latest io.openshift.build.commit.author: Ben Parees <[email protected]> io.openshift.build.commit.date: Wed Jan 20 10:14:27 2016 -0500 io.openshift.build.commit.id: 00cadc392d39d5ef9117cbc8a31db0889eedd442 io.openshift.build.commit.message: 'Merge pull request #51 from php-coder/fix_url_and_sti' io.openshift.build.commit.ref: master io.openshift.build.image: centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e io.openshift.build.source-location: https://github.com/openshift/ruby-hello-world.git io.openshift.builder-base-version: 8d95148 io.openshift.builder-version: 8847438ba06307f86ac877465eadc835201241df io.openshift.s2i.scripts-url: image:///usr/libexec/s2i io.openshift.tags: builder,ruby,ruby22 io.s2i.scripts-url: image:///usr/libexec/s2i license: GPLv2 name: CentOS Base Image vendor: CentOS User: \"1001\" WorkingDir: /opt/app-root/src Container: 86e9a4a3c760271671ab913616c51c9f3cea846ca524bf07c04a6f6c9e103a76 ContainerConfig: AttachStdout: true Cmd: - /bin/sh - -c - tar -C /tmp -xf - && /usr/libexec/s2i/assemble Entrypoint: - container-entrypoint Env: - RACK_ENV=production - OPENSHIFT_BUILD_NAME=ruby-sample-build-1 - OPENSHIFT_BUILD_NAMESPACE=test - OPENSHIFT_BUILD_SOURCE=https://github.com/openshift/ruby-hello-world.git - EXAMPLE=sample-app - PATH=/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin - STI_SCRIPTS_URL=image:///usr/libexec/s2i - STI_SCRIPTS_PATH=/usr/libexec/s2i - HOME=/opt/app-root/src - BASH_ENV=/opt/app-root/etc/scl_enable - ENV=/opt/app-root/etc/scl_enable - PROMPT_COMMAND=. 
/opt/app-root/etc/scl_enable - RUBY_VERSION=2.2 ExposedPorts: 8080/tcp: {} Hostname: ruby-sample-build-1-build Image: centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e OpenStdin: true StdinOnce: true User: \"1001\" WorkingDir: /opt/app-root/src Created: 2016-01-29T13:40:00Z DockerVersion: 1.8.2.fc21 Id: 9d7fd5e2d15495802028c569d544329f4286dcd1c9c085ff5699218dbaa69b43 Parent: 57b08d979c86f4500dc8cad639c9518744c8dd39447c055a3517dc9c18d6fccd Size: 441976279 apiVersion: \"1.0\" kind: DockerImage dockerImageMetadataVersion: \"1.0\" dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d", "oc describe is/<image-name>", "oc describe is/python", "Name: python Namespace: default Created: About a minute ago Labels: <none> Annotations: openshift.io/image.dockerRepositoryCheck=2017-10-02T17:05:11Z Docker Pull Spec: docker-registry.default.svc:5000/default/python Image Lookup: local=false Unique Images: 1 Tags: 1 3.5 tagged from centos/python-35-centos7 * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 About a minute ago", "oc describe istag/<image-stream>:<tag-name>", "oc describe istag/python:latest", "Image Name: sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 Docker Image: centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 Name: sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 Created: 2 minutes ago Image Size: 251.2 MB (first layer 2.898 MB, last binary layer 72.26 MB) Image Created: 2 weeks ago Author: <none> Arch: amd64 Entrypoint: container-entrypoint Command: /bin/sh -c USDSTI_SCRIPTS_PATH/usage Working Dir: /opt/app-root/src User: 1001 Exposes Ports: 8080/tcp Docker Labels: build-date=20170801", "oc get istag <image-stream-tag> -ojsonpath=\"{range .image.dockerImageManifests[*]}{.os}/{.architecture}{'\\n'}{end}\"", "oc get istag busybox:latest -ojsonpath=\"{range .image.dockerImageManifests[*]}{.os}/{.architecture}{'\\n'}{end}\"", "linux/amd64 linux/arm linux/arm64 linux/386 linux/mips64le linux/ppc64le linux/riscv64 linux/s390x", "oc tag <image-name:tag1> <image-name:tag2>", "oc tag python:3.5 python:latest", "Tag python:latest set to python@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25.", "oc describe is/python", "Name: python Namespace: default Created: 5 minutes ago Labels: <none> Annotations: openshift.io/image.dockerRepositoryCheck=2017-10-02T17:05:11Z Docker Pull Spec: docker-registry.default.svc:5000/default/python Image Lookup: local=false Unique Images: 1 Tags: 2 latest tagged from python@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 About a minute ago 3.5 tagged from centos/python-35-centos7 * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 5 minutes ago", "oc tag <repository/image> <image-name:tag>", "oc tag docker.io/python:3.6.0 python:3.6", "Tag python:3.6 set to docker.io/python:3.6.0.", "oc tag <image-name:tag> <image-name:latest>", "oc tag python:3.6 python:latest", "Tag python:latest set to python@sha256:438208801c4806548460b27bd1fbcb7bb188273d13871ab43f.", "oc tag -d <image-name:tag>", "oc tag -d python:3.6", "Deleted tag default/python:3.6", "oc tag <repository/image> <image-name:tag> --scheduled", "oc tag docker.io/python:3.6.0 python:3.6 --scheduled", "Tag python:3.6 set to import docker.io/python:3.6.0 
periodically.", "oc tag <repositiory/image> <image-name:tag>", "oc create secret generic <secret_name> --from-file=.dockerconfigjson=<file_absolute_path> --type=kubernetes.io/dockerconfigjson", "oc import-image <imagestreamtag> --from=<image> --confirm", "oc import-image <multiarch-image-stream-tag> --from=<registry>/<project_name>/<image-name> --import-mode='PreserveOriginal' --reference-policy=local --confirm", "--- Arch: <none> Manifests: linux/amd64 sha256:6e325b86566fafd3c4683a05a219c30c421fbccbf8d87ab9d20d4ec1131c3451 linux/arm64 sha256:d8fad562ffa75b96212c4a6dc81faf327d67714ed85475bf642729703a2b5bf6 linux/ppc64le sha256:7b7e25338e40d8bdeb1b28e37fef5e64f0afd412530b257f5b02b30851f416e1 ---", "oc import-image <multiarch-image-stream-tag> --from=<registry>/<project_name>/<image-name> --import-mode='Legacy' --confirm", "oc import-image <multiarch-image-stream-tag> --from=<registry>/<project_name>/<image-name> --import-mode='PreserveOriginal' --scheduled=true", "oc import-image <multiarch-image-stream-tag> --from=<registry>/<project_name>/<image-name> --import-mode='PreserveOriginal' --insecure=true", "oc import-image <multiarch-image-stream-tag> --from=<registry>/<project_name>/<image-name>", "oc import-image <multiarch-image-stream-tag> --from=<registry>/<project_name>/<image-name> --import-mode='PreserveOriginal'", "oc set image-lookup mysql", "apiVersion: image.openshift.io/v1 kind: ImageStream metadata: annotations: openshift.io/display-name: mysql name: mysql namespace: myproject spec: lookupPolicy: local: true", "oc set image-lookup imagestream --list", "oc set image-lookup deploy/mysql", "apiVersion: apps/v1 kind: Deployment metadata: name: mysql namespace: myproject spec: replicas: 1 template: metadata: annotations: alpha.image.policy.openshift.io/resolve-names: '*' spec: containers: - image: mysql:latest imagePullPolicy: Always name: mysql", "oc set image-lookup deploy/mysql --enabled=false", "apiVersion: v1 kind: Pod metadata: annotations: image.openshift.io/triggers: [ { \"from\": { \"kind\": \"ImageStreamTag\", 1 \"name\": \"example:latest\", 2 \"namespace\": \"myapp\" 3 }, \"fieldPath\": \"spec.template.spec.containers[?(@.name==\\\"web\\\")].image\", 4 \"paused\": false 5 }, # ]", "oc set triggers deploy/example --from-image=example:latest -c web", "apiVersion: apps/v1 kind: Deployment metadata: annotations: image.openshift.io/triggers: '[{\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"example:latest\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"container\\\")].image\"}]'", "oc edit image.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Image 1 metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: 2 - domainName: quay.io insecure: false additionalTrustedCA: 3 name: myconfigmap registrySources: 4 allowedRegistries: - example.com - quay.io - registry.redhat.io - image-registry.openshift-image-registry.svc:5000 - reg1.io/myrepo/myapp:latest insecureRegistries: - insecure.com status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-137-182.us-east-2.compute.internal Ready,SchedulingDisabled worker 65m v1.30.3 ip-10-0-139-120.us-east-2.compute.internal Ready,SchedulingDisabled control-plane 74m v1.30.3 
ip-10-0-176-102.us-east-2.compute.internal Ready control-plane 75m v1.30.3 ip-10-0-188-96.us-east-2.compute.internal Ready worker 65m v1.30.3 ip-10-0-200-59.us-east-2.compute.internal Ready worker 63m v1.30.3 ip-10-0-223-123.us-east-2.compute.internal Ready control-plane 73m v1.30.3", "oc edit image.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 allowedRegistries: 2 - example.com - quay.io - registry.redhat.io - reg1.io/myrepo/myapp:latest - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000", "oc get nodes", "NAME STATUS ROLES AGE VERSION <node_name> Ready control-plane,master 37m v1.27.8+4fab27b", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-5.1# cat /etc/containers/policy.json | jq '.'", "{ \"default\":[ { \"type\":\"reject\" } ], \"transports\":{ \"atomic\":{ \"example.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"image-registry.openshift-image-registry.svc:5000\":[ { \"type\":\"insecureAcceptAnything\" } ], \"insecure.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"quay.io\":[ { \"type\":\"insecureAcceptAnything\" } ], \"reg4.io/myrepo/myapp:latest\":[ { \"type\":\"insecureAcceptAnything\" } ], \"registry.redhat.io\":[ { \"type\":\"insecureAcceptAnything\" } ] }, \"docker\":{ \"example.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"image-registry.openshift-image-registry.svc:5000\":[ { \"type\":\"insecureAcceptAnything\" } ], \"insecure.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"quay.io\":[ { \"type\":\"insecureAcceptAnything\" } ], \"reg4.io/myrepo/myapp:latest\":[ { \"type\":\"insecureAcceptAnything\" } ], \"registry.redhat.io\":[ { \"type\":\"insecureAcceptAnything\" } ] }, \"docker-daemon\":{ \"\":[ { \"type\":\"insecureAcceptAnything\" } ] } } }", "spec: registrySources: insecureRegistries: - insecure.com allowedRegistries: - example.com - quay.io - registry.redhat.io - insecure.com - image-registry.openshift-image-registry.svc:5000", "oc edit image.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 blockedRegistries: 2 - untrusted.com - reg1.io/myrepo/myapp:latest status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000", "oc get nodes", "NAME STATUS ROLES AGE VERSION <node_name> Ready control-plane,master 37m v1.27.8+4fab27b", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-5.1# cat etc/containers/registries.conf", "unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"untrusted.com\" blocked = true", "apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: my-icsp spec: repositoryDigestMirrors: - mirrors: - internal-mirror.io/openshift-payload source: quay.io/openshift-payload", "[[registry]] prefix = \"\" location = \"quay.io/openshift-payload\" mirror-by-digest-only = true [[registry.mirror]] location = 
\"internal-mirror.io/openshift-payload\"", "oc edit image.config.openshift.io cluster", "spec: registrySources: blockedRegistries: - quay.io/openshift-payload", "[[registry]] prefix = \"\" location = \"quay.io/openshift-payload\" blocked = true mirror-by-digest-only = true [[registry.mirror]] location = \"internal-mirror.io/openshift-payload\"", "oc edit image.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 insecureRegistries: 2 - insecure.com - reg4.io/myrepo/myapp:latest allowedRegistries: - example.com - quay.io - registry.redhat.io - insecure.com 3 - reg4.io/myrepo/myapp:latest - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000", "cat /etc/containers/registries.conf", "unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"insecure.com\" insecure = true", "oc edit image.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: - domainName: quay.io insecure: false additionalTrustedCA: name: myconfigmap registrySources: containerRuntimeSearchRegistries: 1 - reg1.io - reg2.io - reg3.io allowedRegistries: 2 - example.com - quay.io - registry.redhat.io - reg1.io - reg2.io - reg3.io - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000", "oc get nodes", "NAME STATUS ROLES AGE VERSION <node_name> Ready control-plane,master 37m v1.27.8+4fab27b", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-5.1# cat /etc/containers/registries.conf.d/01-image-searchRegistries.conf", "unqualified-search-registries = ['reg1.io', 'reg2.io', 'reg3.io']", "apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----", "oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config", "oc edit image.config.openshift.io cluster", "spec: additionalTrustedCA: name: registry-config", "skopeo copy --all docker://registry.access.redhat.com/ubi9/ubi-minimal:latest@sha256:5cf... 
docker://example.io/example/ubi-minimal", "apiVersion: config.openshift.io/v1 1 kind: ImageDigestMirrorSet 2 metadata: name: ubi9repo spec: imageDigestMirrors: 3 - mirrors: - example.io/example/ubi-minimal 4 - example.com/example/ubi-minimal 5 source: registry.access.redhat.com/ubi9/ubi-minimal 6 mirrorSourcePolicy: AllowContactingSource 7 - mirrors: - mirror.example.com/redhat source: registry.example.com/redhat 8 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.com source: registry.example.com 9 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/image source: registry.example.com/example/myimage 10 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net source: registry.example.com/example 11 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/registry-example-com source: registry.example.com 12 mirrorSourcePolicy: AllowContactingSource", "apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: mirror-ocp spec: repositoryDigestMirrors: - mirrors: - mirror.registry.com:443/ocp/release 1 source: quay.io/openshift-release-dev/ocp-release 2 - mirrors: - mirror.registry.com:443/ocp/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "oc create -f registryrepomirror.yaml", "oc get node", "NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.30.3 ip-10-0-138-148.ec2.internal Ready master 11m v1.30.3 ip-10-0-139-122.ec2.internal Ready master 11m v1.30.3 ip-10-0-147-35.ec2.internal Ready worker 7m v1.30.3 ip-10-0-153-12.ec2.internal Ready worker 7m v1.30.3 ip-10-0-154-10.ec2.internal Ready master 11m v1.30.3", "oc debug node/ip-10-0-147-35.ec2.internal", "Starting pod/ip-10-0-147-35ec2internal-debug To use host binaries, run `chroot /host`", "sh-4.2# chroot /host", "sh-4.2# cat /etc/containers/registries.conf", "unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] short-name-mode = \"\" [[registry]] prefix = \"\" location = \"registry.access.redhat.com/ubi9/ubi-minimal\" 1 [[registry.mirror]] location = \"example.io/example/ubi-minimal\" 2 pull-from-mirror = \"digest-only\" 3 [[registry.mirror]] location = \"example.com/example/ubi-minimal\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com\" [[registry.mirror]] location = \"mirror.example.net/registry-example-com\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com/example\" [[registry.mirror]] location = \"mirror.example.net\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com/example/myimage\" [[registry.mirror]] location = \"mirror.example.net/image\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com\" [[registry.mirror]] location = \"mirror.example.com\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com/redhat\" [[registry.mirror]] location = \"mirror.example.com/redhat\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.access.redhat.com/ubi9/ubi-minimal\" blocked = true 4 [[registry.mirror]] location = \"example.io/example/ubi-minimal-tag\" pull-from-mirror = \"tag-only\" 5", "sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi9/ubi-minimal@sha256:5cf", "oc adm migrate icsp <file_name>.yaml <file_name>.yaml <file_name>.yaml --dest-dir <path_to_the_directory>", "oc adm migrate icsp 
icsp.yaml icsp-2.yaml --dest-dir idms-files", "wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi8repo.5911620242173376087.yaml wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi9repo.6456931852378115011.yaml", "oc create -f <path_to_the_directory>/<file-name>.yaml", "podman inspect --format='{{ index .Config.Labels \"io.openshift.s2i.scripts-url\" }}' wildfly/wildfly-centos7", "image:///usr/libexec/s2i", "#!/bin/bash echo \"Before assembling\" /usr/libexec/s2i/assemble rc=$? if [ $rc -eq 0 ]; then echo \"After successful assembling\" else echo \"After failed assembling\" fi exit $rc", "#!/bin/bash echo \"Before running application\" exec /usr/libexec/s2i/run" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/images/index
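The mirror-related commands in the list above (oc create -f for the ImageDigestMirrorSet, oc debug node, and cat /etc/containers/registries.conf) can be chained into a single check. The following is a minimal sketch, not taken from the documentation itself: it assumes a file named idms.yaml holding an ImageDigestMirrorSet such as the ubi9repo example, and the oc wait on the worker machine config pool is an addition for illustration only.

#!/bin/bash
set -euo pipefail

# Apply the mirror configuration (ImageDigestMirrorSet; older clusters use ImageContentSourcePolicy).
oc create -f idms.yaml

# The Machine Config Operator rolls the change out to nodes; wait for the worker pool to settle.
oc wait machineconfigpool/worker --for=condition=Updated --timeout=30m

# Spot-check one node: the mirror should appear as a [[registry.mirror]] entry in registries.conf.
node=$(oc get nodes -o jsonpath='{.items[0].metadata.name}')
oc debug node/"$node" -- chroot /host cat /etc/containers/registries.conf

Passing the command after -- on the last line runs the check non-interactively, instead of the interactive chroot /host session shown in the command list.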